DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers
@ 2019-12-03  5:51 Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 1/4] net/fm10k: " Chenxu Di
                   ` (11 more replies)
  0 siblings, 12 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-03  5:51 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the fm10k, i40e, ice, and ixgbe drivers for the
rte_eth_tx_done_cleanup API, which force-frees consumed buffers on the
Tx ring.
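
For context, a minimal application-side sketch of the API this series
implements. Port and queue IDs here are illustrative; per the ethdev
documentation, free_cnt is the maximum number of packets to free, with
0 meaning "free as many as possible":

	/* Reclaim up to 32 consumed mbufs on port 0, Tx queue 0.
	 * Returns the number of packets freed, or a negative errno
	 * (e.g. -ENOTSUP if the driver lacks the callback).
	 */
	int nb_freed = rte_eth_tx_done_cleanup(0 /* port_id */,
					       0 /* queue_id */, 32);

	if (nb_freed < 0)
		printf("tx_done_cleanup failed: %d\n", nb_freed);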

Di ChenxuX (4):
  net/fm10k: cleanup Tx buffers
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers

 drivers/net/fm10k/fm10k.h         |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c  |  1 +
 drivers/net/fm10k/fm10k_rxtx.c    | 45 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 drivers/net/ice/ice_ethdev.c      |  1 +
 drivers/net/ice/ice_rxtx.c        | 41 ++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c    | 39 +++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |  2 ++
 13 files changed, 177 insertions(+)

-- 
2.17.1



* [dpdk-dev] [PATCH 1/4] net/fm10k: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
@ 2019-12-03  5:51 ` Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 2/4] net/i40e: " Chenxu Di
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-03  5:51 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the fm10k driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.
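
For reference, the ethdev layer dispatches rte_eth_tx_done_cleanup() to
the new tx_done_cleanup member of struct eth_dev_ops, passing the
driver's queue pointer. Its type, roughly as it appears in
rte_ethdev_core.h of this era (quoted from memory, so treat it as a
sketch):

	/* @internal Free consumed buffers on a transmit ring. */
	typedef int (*eth_tx_done_cleanup_t)(void *txq, uint32_t free_cnt);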

Signed-off-by: Di ChenxuX <chenxux.di@intel.com>
---
 drivers/net/fm10k/fm10k.h        |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c |  1 +
 drivers/net/fm10k/fm10k_rxtx.c   | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856ac..ddb1d64ec 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -342,6 +342,8 @@ uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t fm10k_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts);
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 int fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq);
 int fm10k_rx_vec_condition_check(struct rte_eth_dev *);
 void fm10k_rx_queue_release_mbufs_vec(struct fm10k_rx_queue *rxq);
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index db4d72129..328468185 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -2838,6 +2838,7 @@ static const struct eth_dev_ops fm10k_eth_dev_ops = {
 	.reta_query		= fm10k_reta_query,
 	.rss_hash_update	= fm10k_rss_hash_update,
 	.rss_hash_conf_get	= fm10k_rss_hash_conf_get,
+	.tx_done_cleanup	= fm10k_tx_done_cleanup,
 };
 
 static int ftag_check_handler(__rte_unused const char *key,
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 5c3112183..f67c5bf00 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -541,6 +541,51 @@ static inline void tx_free_bulk_mbuf(struct rte_mbuf **txep, int num)
 	}
 }
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct fm10k_tx_queue *q = (struct fm10k_tx_queue *)txq;
+	uint16_t next_rs, count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	next_rs = fifo_peek(&q->rs_tracker);
+	if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE))
+		return count;
+
+	/* the DONE flag is set on this descriptor so remove the ID
+	 * from the RS bit tracker and free the buffers
+	 */
+	fifo_remove(&q->rs_tracker);
+
+	/* wrap around? if so, free buffers from last_free up to but NOT
+	 * including nb_desc
+	 */
+	if (q->last_free > next_rs) {
+		count = q->nb_desc - q->last_free;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free = 0;
+
+		if (unlikely(count == (int)free_cnt))
+			return count;
+	}
+
+	/* adjust free descriptor count before the next loop */
+	q->nb_free += count + (next_rs + 1 - q->last_free);
+
+	/* free buffers from last_free, up to and including next_rs */
+	if (q->last_free <= next_rs) {
+		count = next_rs - q->last_free + 1;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free += count;
+	}
+
+	if (q->last_free == q->nb_desc)
+		q->last_free = 0;
+
+	return count;
+}
+
 static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 {
 	uint16_t next_rs, count = 0;
-- 
2.17.1



* [dpdk-dev] [PATCH 2/4] net/i40e: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 1/4] net/fm10k: " Chenxu Di
@ 2019-12-03  5:51 ` Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 3/4] net/ice: " Chenxu Di
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-03  5:51 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the i40e driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.
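
The implementation below gates all cleanup on descriptor write-back:
the DTYPE field in qword1 of a descriptor reads back as DESC_DONE once
the hardware has finished with it. As a standalone sketch (the helper
name is illustrative, not part of the patch):

	/* Illustrative: non-zero once hardware wrote the descriptor back. */
	static inline int
	tx_desc_done(volatile struct i40e_tx_desc *d)
	{
		return (d->cmd_type_offset_bsz &
			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
	}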

Signed-off-by: Di ChenxuX <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 4 files changed, 43 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 4e40b7ab5..cf35fb5da 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -509,6 +509,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index c77b30c54..b462a9d8c 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 692c3bab4..4296b6195 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2467,6 +2467,46 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+	struct i40e_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 3fc619af9..1a70eda2c 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -204,6 +204,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1



* [dpdk-dev] [PATCH 3/4] net/ice: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 1/4] net/fm10k: " Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 2/4] net/i40e: " Chenxu Di
@ 2019-12-03  5:51 ` Chenxu Di
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 4/4] net/ixgbe: " Chenxu Di
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-03  5:51 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the ice driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.
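
The walk below relies on the software ring's per-slot links; roughly,
each entry carries the following fields (an illustrative mirror of
struct ice_tx_entry):

	struct entry_sketch {
		struct rte_mbuf *mbuf; /* segment owned by this slot, or NULL */
		uint16_t next_id;      /* index of the next slot in the ring */
		uint16_t last_id;      /* slot of the packet's last segment */
	};

Following next_id from the slot after last_desc_cleaned visits each
consumed segment exactly once.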

Signed-off-by: Di ChenxuX <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  1 +
 drivers/net/ice/ice_rxtx.c   | 41 ++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  1 +
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 63997fdfb..617f7b2ac 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -160,6 +160,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 81af81441..f991cb6c0 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -579,6 +579,47 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+	struct ice_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e9214110c..1ac3f3f91 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -170,6 +170,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
-- 
2.17.1



* [dpdk-dev] [PATCH 4/4] net/ixgbe: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (2 preceding siblings ...)
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 3/4] net/ice: " Chenxu Di
@ 2019-12-03  5:51 ` Chenxu Di
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-03  5:51 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the ixgbe driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.
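
Unlike i40e and ice, ixgbe signals completion through the write-back
status word rather than a DTYPE field, so the done-test below reduces
to checking the DD bit. A sketch (txr standing in for q->tx_ring; the
status field is little-endian on the wire):

	/* Illustrative: the descriptor is reclaimable once DD is set. */
	if (txr[tx_id].wb.status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))
		/* hardware is done; sw_ring[tx_id].mbuf can be freed */;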

Signed-off-by: Di ChenxuX <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c   | 39 ++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |  2 ++
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7eb3d0567..255af2290 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -591,6 +591,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -639,6 +640,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index edcfa60ce..7bdb244b0 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,45 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *q = (struct ixgbe_tx_queue *)txq;
+	struct ixgbe_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if (!(q->tx_ring[tx_id].wb.status &
+			IXGBE_TXD_STAT_DD))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..2c3770af6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 0/5] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (3 preceding siblings ...)
  2019-12-03  5:51 ` [dpdk-dev] [PATCH 4/4] net/ixgbe: " Chenxu Di
@ 2019-12-20  3:02 ` Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 1/5] net/fm10k: " Chenxu Di
                     ` (4 more replies)
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
                   ` (6 subsequent siblings)
  11 siblings, 5 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the fm10k, i40e, ice, and ixgbe drivers and the igb VF
for the rte_eth_tx_done_cleanup API, which force-frees consumed buffers
on the Tx ring.

---
v2:
added support for the igb VF.

Chenxu Di (5):
  net/e1000: cleanup Tx buffers
  net/fm10k: cleanup Tx buffers
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |  1 +
 drivers/net/fm10k/fm10k.h         |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c  |  1 +
 drivers/net/fm10k/fm10k_rxtx.c    | 45 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 drivers/net/ice/ice_ethdev.c      |  1 +
 drivers/net/ice/ice_rxtx.c        | 41 ++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c    | 39 +++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |  2 ++
 14 files changed, 178 insertions(+)

-- 
2.17.1



* [dpdk-dev] [PATCH v2 1/5] net/fm10k: cleanup Tx buffers
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
@ 2019-12-20  3:02   ` Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 2/5] net/i40e: " Chenxu Di
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the fm10k driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/fm10k/fm10k.h        |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c |  1 +
 drivers/net/fm10k/fm10k_rxtx.c   | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856ac..ddb1d64ec 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -342,6 +342,8 @@ uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t fm10k_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts);
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 int fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq);
 int fm10k_rx_vec_condition_check(struct rte_eth_dev *);
 void fm10k_rx_queue_release_mbufs_vec(struct fm10k_rx_queue *rxq);
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 407baa16c..c389c79de 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -2897,6 +2897,7 @@ static const struct eth_dev_ops fm10k_eth_dev_ops = {
 	.reta_query		= fm10k_reta_query,
 	.rss_hash_update	= fm10k_rss_hash_update,
 	.rss_hash_conf_get	= fm10k_rss_hash_conf_get,
+	.tx_done_cleanup	= fm10k_tx_done_cleanup,
 };
 
 static int ftag_check_handler(__rte_unused const char *key,
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 5c3112183..f67c5bf00 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -541,6 +541,51 @@ static inline void tx_free_bulk_mbuf(struct rte_mbuf **txep, int num)
 	}
 }
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct fm10k_tx_queue *q = (struct fm10k_tx_queue *)txq;
+	uint16_t next_rs, count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	next_rs = fifo_peek(&q->rs_tracker);
+	if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE))
+		return count;
+
+	/* the DONE flag is set on this descriptor so remove the ID
+	 * from the RS bit tracker and free the buffers
+	 */
+	fifo_remove(&q->rs_tracker);
+
+	/* wrap around? if so, free buffers from last_free up to but NOT
+	 * including nb_desc
+	 */
+	if (q->last_free > next_rs) {
+		count = q->nb_desc - q->last_free;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free = 0;
+
+		if (unlikely(count == (int)free_cnt))
+			return count;
+	}
+
+	/* adjust free descriptor count before the next loop */
+	q->nb_free += count + (next_rs + 1 - q->last_free);
+
+	/* free buffers from last_free, up to and including next_rs */
+	if (q->last_free <= next_rs) {
+		count = next_rs - q->last_free + 1;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free += count;
+	}
+
+	if (q->last_free == q->nb_desc)
+		q->last_free = 0;
+
+	return count;
+}
+
 static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 {
 	uint16_t next_rs, count = 0;
-- 
2.17.1



* [dpdk-dev] [PATCH v2 2/5] net/i40e: cleanup Tx buffers
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 1/5] net/fm10k: " Chenxu Di
@ 2019-12-20  3:02   ` Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 3/5] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the i40e driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 4 files changed, 43 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..3280a3ff6 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,46 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+	struct i40e_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..8f11f011a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 3/5] net/ice: cleanup Tx buffers
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 1/5] net/fm10k: " Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 2/5] net/i40e: " Chenxu Di
@ 2019-12-20  3:02   ` Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 4/5] net/ixgbe: " Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the ice driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  1 +
 drivers/net/ice/ice_rxtx.c   | 41 ++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  1 +
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..154cc5e5f 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,47 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+	struct ice_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..8d4232a61 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -183,6 +183,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
-- 
2.17.1



* [dpdk-dev] [PATCH v2 4/5] net/ixgbe: cleanup Tx buffers
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 3/5] net/ice: " Chenxu Di
@ 2019-12-20  3:02   ` Chenxu Di
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Di ChenxuX

From: Di ChenxuX <chenxux.di@intel.com>

Add support to the ixgbe driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c   | 39 ++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |  2 ++
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..0091405db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..4823a9cf1 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,45 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *q = (struct ixgbe_tx_queue *)txq;
+	struct ixgbe_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if (!(q->tx_ring[tx_id].wb.status &
+			IXGBE_TXD_STAT_DD))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..2c3770af6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 5/5] net/e1000: cleanup Tx buffers
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
                     ` (3 preceding siblings ...)
  2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 4/5] net/ixgbe: " Chenxu Di
@ 2019-12-20  3:02   ` Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:02 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb VF for the rte_eth_tx_done_cleanup API, which
force-frees consumed buffers on the Tx ring.
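
No new cleanup routine is added here: the igb PF code already provides
one, and the VF ops table simply reuses it. The existing prototype, as
assumed from e1000_ethdev.h:

	int eth_igb_tx_done_cleanup(void *txq, uint32_t free_cnt);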

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1



* [dpdk-dev] [PATCH v3 0/5] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (4 preceding siblings ...)
  2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
@ 2019-12-20  3:15 ` Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 1/5] net/fm10k: " Chenxu Di
                     ` (4 more replies)
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
                   ` (5 subsequent siblings)
  11 siblings, 5 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the fm10k, i40e, ice, and ixgbe drivers and the igb VF
for the rte_eth_tx_done_cleanup API, which force-frees consumed buffers
on the Tx ring.

---
v2:
added support for the igb VF.
v3:
corrected the author information.

Chenxu Di (5):
  net/fm10k: cleanup Tx buffers
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |  1 +
 drivers/net/fm10k/fm10k.h         |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c  |  1 +
 drivers/net/fm10k/fm10k_rxtx.c    | 45 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 drivers/net/ice/ice_ethdev.c      |  1 +
 drivers/net/ice/ice_rxtx.c        | 41 ++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c    | 39 +++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |  2 ++
 14 files changed, 178 insertions(+)

-- 
2.17.1



* [dpdk-dev] [PATCH v3 1/5] net/fm10k: cleanup Tx buffers
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
@ 2019-12-20  3:15   ` Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 2/5] net/i40e: " Chenxu Di
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the fm10k driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/fm10k/fm10k.h        |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c |  1 +
 drivers/net/fm10k/fm10k_rxtx.c   | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856ac..ddb1d64ec 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -342,6 +342,8 @@ uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t fm10k_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts);
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 int fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq);
 int fm10k_rx_vec_condition_check(struct rte_eth_dev *);
 void fm10k_rx_queue_release_mbufs_vec(struct fm10k_rx_queue *rxq);
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 407baa16c..c389c79de 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -2897,6 +2897,7 @@ static const struct eth_dev_ops fm10k_eth_dev_ops = {
 	.reta_query		= fm10k_reta_query,
 	.rss_hash_update	= fm10k_rss_hash_update,
 	.rss_hash_conf_get	= fm10k_rss_hash_conf_get,
+	.tx_done_cleanup	= fm10k_tx_done_cleanup,
 };
 
 static int ftag_check_handler(__rte_unused const char *key,
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 5c3112183..f67c5bf00 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -541,6 +541,51 @@ static inline void tx_free_bulk_mbuf(struct rte_mbuf **txep, int num)
 	}
 }
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct fm10k_tx_queue *q = (struct fm10k_tx_queue *)txq;
+	uint16_t next_rs, count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	next_rs = fifo_peek(&q->rs_tracker);
+	if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE))
+		return count;
+
+	/* the DONE flag is set on this descriptor so remove the ID
+	 * from the RS bit tracker and free the buffers
+	 */
+	fifo_remove(&q->rs_tracker);
+
+	/* wrap around? if so, free buffers from last_free up to but NOT
+	 * including nb_desc
+	 */
+	if (q->last_free > next_rs) {
+		count = q->nb_desc - q->last_free;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free = 0;
+
+		if (unlikely(count == (int)free_cnt))
+			return count;
+	}
+
+	/* adjust free descriptor count before the next loop */
+	q->nb_free += count + (next_rs + 1 - q->last_free);
+
+	/* free buffers from last_free, up to and including next_rs */
+	if (q->last_free <= next_rs) {
+		count = next_rs - q->last_free + 1;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free += count;
+	}
+
+	if (q->last_free == q->nb_desc)
+		q->last_free = 0;
+
+	return count;
+}
+
 static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 {
 	uint16_t next_rs, count = 0;
-- 
2.17.1



* [dpdk-dev] [PATCH v3 2/5] net/i40e: cleanup Tx buffers
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 1/5] net/fm10k: " Chenxu Di
@ 2019-12-20  3:15   ` Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 3/5] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |  1 +
 drivers/net/i40e/i40e_ethdev_vf.c |  1 +
 drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |  1 +
 4 files changed, 43 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..3280a3ff6 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,46 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+	struct i40e_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..8f11f011a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 3/5] net/ice: cleanup Tx buffers
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 1/5] net/fm10k: " Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 2/5] net/i40e: " Chenxu Di
@ 2019-12-20  3:15   ` Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 4/5] net/ixgbe: " Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  1 +
 drivers/net/ice/ice_rxtx.c   | 41 ++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  1 +
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..154cc5e5f 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,47 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+	struct ice_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if ((q->tx_ring[tx_id].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..8d4232a61 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -183,6 +183,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
-- 
2.17.1



* [dpdk-dev] [PATCH v3 4/5] net/ixgbe: cleanup Tx buffers
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 3/5] net/ice: " Chenxu Di
@ 2019-12-20  3:15   ` Chenxu Di
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  2 ++
 drivers/net/ixgbe/ixgbe_rxtx.c   | 39 ++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |  2 ++
 3 files changed, 43 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..0091405db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..4823a9cf1 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,45 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *q = (struct ixgbe_tx_queue *)txq;
+	struct ixgbe_tx_entry *sw_ring;
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_cleaned;
+
+	int count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	sw_ring = q->sw_ring;
+	tx_cleaned = q->last_desc_cleaned;
+	tx_id = sw_ring[q->last_desc_cleaned].next_id;
+	if (!(q->tx_ring[tx_id].wb.status &
+			IXGBE_TXD_STAT_DD))
+		return 0;
+
+	do {
+		if (sw_ring[tx_id].mbuf == NULL)
+			break;
+
+		rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+		sw_ring[tx_id].mbuf = NULL;
+		sw_ring[tx_id].last_id = tx_id;
+
+		/* Move to the next segment. */
+		tx_cleaned = tx_id;
+		tx_id = sw_ring[tx_id].next_id;
+		count++;
+	} while (count != (int)free_cnt);
+
+	q->nb_tx_free += (uint16_t)count;
+	q->last_desc_cleaned = tx_cleaned;
+
+	return count;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..2c3770af6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 5/5] net/e1000: cleanup Tx buffers
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
                     ` (3 preceding siblings ...)
  2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 4/5] net/ixgbe: " Chenxu Di
@ 2019-12-20  3:15   ` Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-20  3:15 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb VF for the rte_eth_tx_done_cleanup API, which
force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1



* [dpdk-dev] [PATCH v4 0/5] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (5 preceding siblings ...)
  2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
@ 2019-12-24  2:39 ` Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 1/5] net/fm10k: " Chenxu Di
                     ` (4 more replies)
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
                   ` (4 subsequent siblings)
  11 siblings, 5 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the fm10k, i40e, ice, and ixgbe drivers and the igb VF
for the rte_eth_tx_done_cleanup API, which force-frees consumed buffers
on the Tx ring.

---
v2:
added support for the igb VF.
v3:
corrected the author information.
v4:
reworked the i40e, ice, and ixgbe cleanup implementations.

Chenxu Di (5):
  net/fm10k: cleanup Tx buffers
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |   1 +
 drivers/net/fm10k/fm10k.h         |   2 +
 drivers/net/fm10k/fm10k_ethdev.c  |   1 +
 drivers/net/fm10k/fm10k_rxtx.c    |  45 +++++++++++
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 122 +++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 drivers/net/ice/ice_ethdev.c      |   1 +
 drivers/net/ice/ice_rxtx.c        | 123 ++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |   1 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c    | 121 +++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |   2 +
 14 files changed, 424 insertions(+)

-- 
2.17.1



* [dpdk-dev] [PATCH v4 1/5] net/fm10k: cleanup Tx buffers
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
@ 2019-12-24  2:39   ` Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 2/5] net/i40e: " Chenxu Di
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the fm10k driver for the rte_eth_tx_done_cleanup API,
which force-frees consumed buffers on the Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/fm10k/fm10k.h        |  2 ++
 drivers/net/fm10k/fm10k_ethdev.c |  1 +
 drivers/net/fm10k/fm10k_rxtx.c   | 45 ++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856ac..ddb1d64ec 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -342,6 +342,8 @@ uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t fm10k_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts);
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 int fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq);
 int fm10k_rx_vec_condition_check(struct rte_eth_dev *);
 void fm10k_rx_queue_release_mbufs_vec(struct fm10k_rx_queue *rxq);
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 407baa16c..c389c79de 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -2897,6 +2897,7 @@ static const struct eth_dev_ops fm10k_eth_dev_ops = {
 	.reta_query		= fm10k_reta_query,
 	.rss_hash_update	= fm10k_rss_hash_update,
 	.rss_hash_conf_get	= fm10k_rss_hash_conf_get,
+	.tx_done_cleanup	= fm10k_tx_done_cleanup,
 };
 
 static int ftag_check_handler(__rte_unused const char *key,
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 5c3112183..f67c5bf00 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -541,6 +541,51 @@ static inline void tx_free_bulk_mbuf(struct rte_mbuf **txep, int num)
 	}
 }
 
+int fm10k_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct fm10k_tx_queue *q = (struct fm10k_tx_queue *)txq;
+	uint16_t next_rs, count = 0;
+
+	if (q == NULL)
+		return -ENODEV;
+
+	next_rs = fifo_peek(&q->rs_tracker);
+	if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE))
+		return count;
+
+	/* the DONE flag is set on this descriptor so remove the ID
+	 * from the RS bit tracker and free the buffers
+	 */
+	fifo_remove(&q->rs_tracker);
+
+	/* wrap around? if so, free buffers from last_free up to but NOT
+	 * including nb_desc
+	 */
+	if (q->last_free > next_rs) {
+		count = q->nb_desc - q->last_free;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free = 0;
+
+		if (unlikely(count == (int)free_cnt))
+			return count;
+	}
+
+	/* adjust free descriptor count before the next loop */
+	q->nb_free += count + (next_rs + 1 - q->last_free);
+
+	/* free buffers from last_free, up to and including next_rs */
+	if (q->last_free <= next_rs) {
+		count = next_rs - q->last_free + 1;
+		tx_free_bulk_mbuf(&q->sw_ring[q->last_free], count);
+		q->last_free += count;
+	}
+
+	if (q->last_free == q->nb_desc)
+		q->last_free = 0;
+
+	return count;
+}
+
 static inline void tx_free_descriptors(struct fm10k_tx_queue *q)
 {
 	uint16_t next_rs, count = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread
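
The per-driver callback registered in eth_dev_ops above is reached through the
generic ethdev API; the dispatch looks roughly like this (simplified sketch of
the rte_ethdev layer, details vary by release):

int
rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	/* Port/queue validation trimmed; fails with -ENOTSUP for PMDs
	 * that don't register the callback.
	 */
	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_done_cleanup, -ENOTSUP);

	/* The PMD gets the raw queue pointer and the requested count. */
	return (*dev->dev_ops->tx_done_cleanup)(dev->data->tx_queues[queue_id],
						free_cnt);
}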

* [dpdk-dev] [PATCH v4 2/5] net/i40e: cleanup Tx buffers
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 1/5] net/fm10k: " Chenxu Di
@ 2019-12-24  2:39   ` Chenxu Di
  2019-12-26  8:24     ` Xing, Beilei
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 3/5] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 122 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 4 files changed, 125 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..e75733b8e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,128 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int i40e_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *txq = (struct i40e_tx_queue *)q;
+	struct i40e_tx_entry *sw_ring;
+	volatile struct i40e_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_first = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_first].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if ((txr[tx_last].cmd_type_offset_bsz &
+				rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
+				rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) {
+				/*
+				 * Increment the number of packets
+				 * freed.
+				 */
+				count++;
+
+				/* Get the start of the next packet. */
+				tx_next = sw_ring[tx_last].next_id;
+
+				/*
+				 * Loop through all segments in a
+				 * packet.
+				 */
+				do {
+					rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+					sw_ring[tx_id].mbuf = NULL;
+					sw_ring[tx_id].last_id = tx_id;
+
+					/* Move to next segemnt. */
+					tx_id = sw_ring[tx_id].next_id;
+
+				} while (tx_id != tx_next);
+
+				if (unlikely(count == (int)free_cnt))
+					break;
+			} else {
+				/*
+				 * mbuf still in use, nothing left to
+				 * free.
+				 */
+				break;
+			}
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segemnt. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..8f11f011a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v4 3/5] net/ice: cleanup Tx buffers
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 1/5] net/fm10k: " Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 2/5] net/i40e: " Chenxu Di
@ 2019-12-24  2:39   ` Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 4/5] net/ixgbe: " Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |   1 +
 drivers/net/ice/ice_rxtx.c   | 123 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |   1 +
 3 files changed, 125 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..4aead98fd 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,129 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+int ice_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct ice_tx_queue *txq = (struct ice_tx_queue *)q;
+	struct ice_tx_entry *sw_ring;
+	volatile struct ice_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_first = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_first].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if ((txr[tx_last].cmd_type_offset_bsz &
+				rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+				rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
+				/*
+				 * Increment the number of packets
+				 * freed.
+				 */
+				count++;
+
+				/* Get the start of the next packet. */
+				tx_next = sw_ring[tx_last].next_id;
+
+				/*
+				 * Loop through all segments in a
+				 * packet.
+				 */
+				do {
+					rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+					sw_ring[tx_id].mbuf = NULL;
+					sw_ring[tx_id].last_id = tx_id;
+
+					/* Move to next segemnt. */
+					tx_id = sw_ring[tx_id].next_id;
+
+				} while (tx_id != tx_next);
+
+				if (unlikely(count == (int)free_cnt))
+					break;
+			} else {
+				/*
+				 * mbuf still in use, nothing left to
+				 * free.
+				 */
+				break;
+			}
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segemnt. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..8d4232a61 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -183,6 +183,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v4 4/5] net/ixgbe: cleanup Tx buffers
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 3/5] net/ice: " Chenxu Di
@ 2019-12-24  2:39   ` Chenxu Di
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 5/5] net/e1000: " Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c   | 121 +++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
 3 files changed, 125 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..0091405db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..0cd56d427 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,127 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
+	struct ixgbe_tx_entry *sw_ring;
+	volatile union ixgbe_adv_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_first = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_first].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if (txr[tx_last].wb.status &
+					IXGBE_TXD_STAT_DD) {
+				/*
+				 * Increment the number of packets
+				 * freed.
+				 */
+				count++;
+
+				/* Get the start of the next packet. */
+				tx_next = sw_ring[tx_last].next_id;
+
+				/*
+				 * Loop through all segments in a
+				 * packet.
+				 */
+				do {
+					rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+					sw_ring[tx_id].mbuf = NULL;
+					sw_ring[tx_id].last_id = tx_id;
+
+					/* Move to next segemnt. */
+					tx_id = sw_ring[tx_id].next_id;
+
+				} while (tx_id != tx_next);
+
+				if (unlikely(count == (int)free_cnt))
+					break;
+			} else {
+				/*
+				 * mbuf still in use, nothing left to
+				 * free.
+				 */
+				break;
+			}
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segemnt. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..2c3770af6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v4 5/5] net/e1000: cleanup Tx buffers
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
                     ` (3 preceding siblings ...)
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 4/5] net/ixgbe: " Chenxu Di
@ 2019-12-24  2:39   ` Chenxu Di
  4 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-24  2:39 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb VF for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/5] net/i40e: cleanup Tx buffers
  2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 2/5] net/i40e: " Chenxu Di
@ 2019-12-26  8:24     ` Xing, Beilei
  0 siblings, 0 replies; 74+ messages in thread
From: Xing, Beilei @ 2019-12-26  8:24 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chenxu Di
> Sent: Tuesday, December 24, 2019 10:39 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v4 2/5] net/i40e: cleanup Tx buffers
> 
> Add support to the i40e driver for the API rte_eth_tx_done_cleanup to force
> free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/i40e/i40e_ethdev.c    |   1 +
>  drivers/net/i40e/i40e_ethdev_vf.c |   1 +
>  drivers/net/i40e/i40e_rxtx.c      | 122 ++++++++++++++++++++++++++++++
>  drivers/net/i40e/i40e_rxtx.h      |   1 +
>  4 files changed, 125 insertions(+)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 5999c964b..fad47a942 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
>  	.mac_addr_set                 = i40e_set_default_mac_addr,
>  	.mtu_set                      = i40e_dev_mtu_set,
>  	.tm_ops_get                   = i40e_tm_ops_get,
> +	.tx_done_cleanup              = i40e_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> index 5dba0928b..0ca5417d7 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
>  	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
>  	.mtu_set              = i40evf_dev_mtu_set,
>  	.mac_addr_set         = i40evf_set_default_mac_addr,
> +	.tx_done_cleanup      = i40e_tx_done_cleanup,
>  };
> 
>  /*
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 17dc8c78f..e75733b8e 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2455,6 +2455,128 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
>  	}
>  }
> 
> +int i40e_tx_done_cleanup(void *q, uint32_t free_cnt)
> +{
> +	struct i40e_tx_queue *txq = (struct i40e_tx_queue *)q;
> +	struct i40e_tx_entry *sw_ring;
> +	volatile struct i40e_tx_desc *txr;
> +	uint16_t tx_first; /* First segment analyzed. */
> +	uint16_t tx_id;    /* Current segment being processed. */
> +	uint16_t tx_last;  /* Last segment in the current packet. */
> +	uint16_t tx_next;  /* First segment of the next packet. */
> +	int count;
> +
> +	if (txq == NULL)
> +		return -ENODEV;
> +
> +	count = 0;
> +	sw_ring = txq->sw_ring;
> +	txr = txq->tx_ring;
> +
> +	/*
> +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> +	 * of that packet (the last segment in the packet chain) and
> +	 * then the next segment will be the start of the oldest segment
> +	 * in the sw_ring. This is the first packet that will be
> +	 * attempted to be freed.
> +	 */
> +
> +	/* Get last segment in most recently added packet. */
> +	tx_first = sw_ring[txq->tx_tail].last_id;

Wouldn't tx_last be more readable here? And then tx_first = sw_ring[tx_last].next_id?
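I.e., roughly:

	/* Get last segment in most recently added packet. */
	tx_last = sw_ring[txq->tx_tail].last_id;

	/* Get the next segment, which is the oldest segment in ring. */
	tx_first = sw_ring[tx_last].next_id;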

> +
> +	/* Get the next segment, which is the oldest segment in ring. */
> +	tx_first = sw_ring[tx_first].next_id;
> +
> +	/* Set the current index to the first. */
> +	tx_id = tx_first;
> +
> +	/*
> +	 * Loop through each packet. For each packet, verify that an
> +	 * mbuf exists and that the last segment is free. If so, free
> +	 * it and move on.
> +	 */
> +	while (1) {
> +		tx_last = sw_ring[tx_id].last_id;
> +
> +		if (sw_ring[tx_last].mbuf) {
> +			if ((txr[tx_last].cmd_type_offset_bsz &
> +			    rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=

Is the logic reversed? '!=' means the mbuf is still in use.

> +			    rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) {
> +				/*
> +				 * Increment the number of packets
> +				 * freed.
> +				 */
> +				count++;

Shouldn't 'count++' come after the mbufs are freed?

> +
> +				/* Get the start of the next packet. */
> +				tx_next = sw_ring[tx_last].next_id;
> +
> +				/*
> +				 * Loop through all segments in a
> +				 * packet.
> +				 */
> +				do {
> +
> 	rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> +					sw_ring[tx_id].mbuf = NULL;
> +					sw_ring[tx_id].last_id = tx_id;
> +
> +					/* Move to next segemnt. */

Typo: segemnt -> segment

> +					tx_id = sw_ring[tx_id].next_id;
> +
> +				} while (tx_id != tx_next);
> +
> +				if (unlikely(count == (int)free_cnt))
> +					break;
> +			} else {
> +				/*
> +				 * mbuf still in use, nothing left to
> +				 * free.
> +				 */
> +				break;
> +			}
> +		} else {
> +			/*
> +			 * There are multiple reasons to be here:
> +			 * 1) All the packets on the ring have been
> +			 *    freed - tx_id is equal to tx_first
> +			 *    and some packets have been freed.
> +			 *    - Done, exit
> +			 * 2) Interfaces has not sent a rings worth of
> +			 *    packets yet, so the segment after tail is
> +			 *    still empty. Or a previous call to this
> +			 *    function freed some of the segments but
> +			 *    not all so there is a hole in the list.
> +			 *    Hopefully this is a rare case.
> +			 *    - Walk the list and find the next mbuf. If
> +			 *      there isn't one, then done.
> +			 */
> +			if (likely(tx_id == tx_first && count != 0))
> +				break;
> +
> +			/*
> +			 * Walk the list and find the next mbuf, if any.
> +			 */
> +			do {
> +				/* Move to next segemnt. */

Typo: segemnt -> segment

> +				tx_id = sw_ring[tx_id].next_id;
> +
> +				if (sw_ring[tx_id].mbuf)
> +					break;
> +
> +			} while (tx_id != tx_first);
> +
> +			/*
> +			 * Determine why previous loop bailed. If there
> +			 * is not an mbuf, done.
> +			 */
> +			if (sw_ring[tx_id].mbuf == NULL)
> +				break;
> +		}
> +	}
> +
> +	return count;
> +}
> +
>  void
>  i40e_reset_tx_queue(struct i40e_tx_queue *txq)
>  {
> diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
> index 2106bb355..8f11f011a 100644
> --- a/drivers/net/i40e/i40e_rxtx.h
> +++ b/drivers/net/i40e/i40e_rxtx.h
> @@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
>  void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
>  void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
>  void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
> +int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
>  int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
>  void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v6 0/4] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (6 preceding siblings ...)
  2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
@ 2019-12-30  9:38 ` Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 1/4] net/i40e: " Chenxu Di
                     ` (3 more replies)
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
                   ` (3 subsequent siblings)
  11 siblings, 4 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-30  9:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the drivers including i40e, ice, ixgbe
and igb VF for the API rte_eth_tx_done_cleanup to force
free consumed buffers on Tx ring.

---
v2:
added support for the igb VF.
v3:
fixed author information.
v4:
reworked the cleanup code.
v5:
fixed code and comments.
removed the fm10k changes.
v6:
fixed checkpatch warnings.

Reviewed-by: Xiao Zhang <xiao.zhang@intel.com>

Chenxu Di (4):
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 121 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 drivers/net/ice/ice_ethdev.c      |   1 +
 drivers/net/ice/ice_rxtx.c        | 118 +++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |   1 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c    | 116 ++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |   2 +
 11 files changed, 365 insertions(+)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread
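
For context, an application typically calls this cleanup when its mempool runs
dry while transmitted mbufs are still parked on a Tx ring. A minimal sketch
(port_id, queue_id, mp and the count of 64 are illustrative):

	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

	if (m == NULL) {
		/* Ask the PMD to return up to 64 already-transmitted
		 * packets from the Tx ring back to their mempool.
		 */
		int freed = rte_eth_tx_done_cleanup(port_id, queue_id, 64);

		if (freed > 0)
			m = rte_pktmbuf_alloc(mp); /* retry the allocation */
		/* freed == -ENOTSUP means the PMD lacks the callback */
	}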

* [dpdk-dev] [PATCH v6 1/4] net/i40e: cleanup Tx buffers
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
@ 2019-12-30  9:38   ` Chenxu Di
  2019-12-30 13:01     ` Ananyev, Konstantin
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 2/4] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2019-12-30  9:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 121 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 4 files changed, 124 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..883419bd7 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,127 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int i40e_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *txq = (struct i40e_tx_queue *)q;
+	struct i40e_tx_entry *sw_ring;
+	volatile struct i40e_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_last = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_last].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if ((txr[tx_last].cmd_type_offset_bsz &
+			    rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
+			    rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
+				/*
+				 * mbuf still in use, nothing left to
+				 * free.
+				 */
+				break;
+
+			/* Get the start of the next packet. */
+			tx_next = sw_ring[tx_last].next_id;
+
+			/*
+			 * Loop through all segments in a
+			 * packet.
+			 */
+			do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+				sw_ring[tx_id].mbuf = NULL;
+				sw_ring[tx_id].last_id = tx_id;
+
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+			} while (tx_id != tx_next);
+
+			/*
+			 * Increment the number of packets
+			 * freed.
+			 */
+			count++;
+
+			if (unlikely(count == (int)free_cnt))
+				break;
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..8f11f011a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v6 2/4] net/ice: cleanup Tx buffers
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 1/4] net/i40e: " Chenxu Di
@ 2019-12-30  9:38   ` Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 3/4] net/ixgbe: " Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 4/4] net/e1000: " Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-30  9:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |   1 +
 drivers/net/ice/ice_rxtx.c   | 118 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |   1 +
 3 files changed, 120 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..8f4654cba 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,124 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+int ice_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct ice_tx_queue *txq = (struct ice_tx_queue *)q;
+	struct ice_tx_entry *sw_ring;
+	volatile struct ice_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_last = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_last].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if ((txr[tx_last].cmd_type_offset_bsz &
+			    rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) !=
+			    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))
+				break;
+
+			/* Get the start of the next packet. */
+			tx_next = sw_ring[tx_last].next_id;
+
+			/*
+			 * Loop through all segments in a
+			 * packet.
+			 */
+			do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+				sw_ring[tx_id].mbuf = NULL;
+				sw_ring[tx_id].last_id = tx_id;
+
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+			} while (tx_id != tx_next);
+
+			/*
+			 * Increment the number of packets
+			 * freed.
+			 */
+			count++;
+
+			if (unlikely(count == (int)free_cnt))
+				break;
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..8d4232a61 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -183,6 +183,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 1/4] net/i40e: " Chenxu Di
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 2/4] net/ice: " Chenxu Di
@ 2019-12-30  9:38   ` Chenxu Di
  2019-12-30 12:53     ` Ananyev, Konstantin
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 4/4] net/e1000: " Chenxu Di
  3 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2019-12-30  9:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c   | 116 +++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
 3 files changed, 120 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..0091405db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..520b9c756 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,122 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
+	struct ixgbe_tx_entry *sw_ring;
+	volatile union ixgbe_adv_tx_desc *txr;
+	uint16_t tx_first; /* First segment analyzed. */
+	uint16_t tx_id;    /* Current segment being processed. */
+	uint16_t tx_last;  /* Last segment in the current packet. */
+	uint16_t tx_next;  /* First segment of the next packet. */
+	int count;
+
+	if (txq == NULL)
+		return -ENODEV;
+
+	count = 0;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	/*
+	 * tx_tail is the last sent packet on the sw_ring. Goto the end
+	 * of that packet (the last segment in the packet chain) and
+	 * then the next segment will be the start of the oldest segment
+	 * in the sw_ring. This is the first packet that will be
+	 * attempted to be freed.
+	 */
+
+	/* Get last segment in most recently added packet. */
+	tx_last = sw_ring[txq->tx_tail].last_id;
+
+	/* Get the next segment, which is the oldest segment in ring. */
+	tx_first = sw_ring[tx_last].next_id;
+
+	/* Set the current index to the first. */
+	tx_id = tx_first;
+
+	/*
+	 * Loop through each packet. For each packet, verify that an
+	 * mbuf exists and that the last segment is free. If so, free
+	 * it and move on.
+	 */
+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if (!(txr[tx_last].wb.status &
+				IXGBE_TXD_STAT_DD))
+				break;
+
+			/* Get the start of the next packet. */
+			tx_next = sw_ring[tx_last].next_id;
+
+			/*
+			 * Loop through all segments in a
+			 * packet.
+			 */
+			do {
+				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
+				sw_ring[tx_id].mbuf = NULL;
+				sw_ring[tx_id].last_id = tx_id;
+
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+			} while (tx_id != tx_next);
+
+			/*
+			 * Increment the number of packets
+			 * freed.
+			 */
+			count++;
+
+			if (unlikely(count == (int)free_cnt))
+				break;
+		} else {
+			/*
+			 * There are multiple reasons to be here:
+			 * 1) All the packets on the ring have been
+			 *    freed - tx_id is equal to tx_first
+			 *    and some packets have been freed.
+			 *    - Done, exit
+			 * 2) Interfaces has not sent a rings worth of
+			 *    packets yet, so the segment after tail is
+			 *    still empty. Or a previous call to this
+			 *    function freed some of the segments but
+			 *    not all so there is a hole in the list.
+			 *    Hopefully this is a rare case.
+			 *    - Walk the list and find the next mbuf. If
+			 *      there isn't one, then done.
+			 */
+			if (likely(tx_id == tx_first && count != 0))
+				break;
+
+			/*
+			 * Walk the list and find the next mbuf, if any.
+			 */
+			do {
+				/* Move to next segment. */
+				tx_id = sw_ring[tx_id].next_id;
+
+				if (sw_ring[tx_id].mbuf)
+					break;
+
+			} while (tx_id != tx_first);
+
+			/*
+			 * Determine why previous loop bailed. If there
+			 * is not an mbuf, done.
+			 */
+			if (sw_ring[tx_id].mbuf == NULL)
+				break;
+		}
+	}
+
+	return count;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..2c3770af6 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v6 4/4] net/e1000: cleanup Tx buffers
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 3/4] net/ixgbe: " Chenxu Di
@ 2019-12-30  9:38   ` Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2019-12-30  9:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb VF for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 3/4] net/ixgbe: " Chenxu Di
@ 2019-12-30 12:53     ` Ananyev, Konstantin
  2020-01-03  9:01       ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ananyev, Konstantin @ 2019-12-30 12:53 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX

Hi,

> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
>  drivers/net/ixgbe/ixgbe_rxtx.c   | 116 +++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
>  3 files changed, 120 insertions(+)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 2c6fd0f13..0091405db 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>  	.tm_ops_get           = ixgbe_tm_ops_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,

I don't see how we can have one tx_done_cleanup() for different tx functions.
Vector and scalar TX paths use different formats for sw_ring[] entries.
Also, the offload and simple TX paths use different methods to track used/free
descriptors, and use different functions to free them:
offload uses tx_entry next_id, last_id plus txq->last_desc_cleaned, while
simple TX paths use tx_next_dd.
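A path-aware wrapper would be needed instead, e.g. (sketch only; the predicate
mirrors how ixgbe_set_tx_function() selects the simple path, and the two
helpers are hypothetical):

int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
	struct ixgbe_tx_queue *txq = tx_queue;

	if (txq == NULL)
		return -ENODEV;

	/* Simple/vector Tx tracks completions via tx_next_dd ... */
	if (txq->offloads == 0 &&
	    txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)
		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);

	/* ... while the offload path walks next_id/last_id chains. */
	return ixgbe_tx_done_cleanup_full(txq, free_cnt);
}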


>  };
> 
>  /*
> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.reta_query           = ixgbe_dev_rss_reta_query,
>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index fa572d184..520b9c756 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2306,6 +2306,122 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>  	}
>  }
> 
> +int ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)

That seems to work only for the offload (full) TX path (ixgbe_xmit_pkts).
The simple (fast) path doesn't seem to be covered by this function.

> +{
> +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
> +	struct ixgbe_tx_entry *sw_ring;
> +	volatile union ixgbe_adv_tx_desc *txr;
> +	uint16_t tx_first; /* First segment analyzed. */
> +	uint16_t tx_id;    /* Current segment being processed. */
> +	uint16_t tx_last;  /* Last segment in the current packet. */
> +	uint16_t tx_next;  /* First segment of the next packet. */
> +	int count;
> +
> +	if (txq == NULL)
> +		return -ENODEV;
> +
> +	count = 0;
> +	sw_ring = txq->sw_ring;
> +	txr = txq->tx_ring;
> +
> +	/*
> +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> +	 * of that packet (the last segment in the packet chain) and
> +	 * then the next segment will be the start of the oldest segment
> +	 * in the sw_ring. 

Not sure I understand the sentence above.
tx_tail is the value of the TDT HW register (the TD most recently armed by SW).
last_id is the index of the last descriptor of a multi-seg packet.
next_id is just the index of the next descriptor in the HW TD ring.
How do you conclude that it will be the 'oldest segment in the sw_ring'?

Another question: why do you need to write your own functions?
Why can't you reuse the existing ixgbe_xmit_cleanup() for the full (offload) path
and ixgbe_tx_free_bufs() for the simple path?
Yes, ixgbe_xmit_cleanup() doesn't free mbufs, but at least it could be used
to determine the finished TX descriptors.
Based on that you can free the appropriate sw_ring[] entries.
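For the simple path that reuse could be as small as (sketch; assumes
ixgbe_tx_free_bufs() frees one tx_rs_thresh burst per call and returns how
many buffers it released):

static int
ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
	int i, n, cnt;

	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
		free_cnt = txq->nb_tx_desc;

	/* Round down to whole tx_rs_thresh bursts. */
	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;

	for (i = 0; i < cnt; i += n) {
		n = ixgbe_tx_free_bufs(txq);
		if (n == 0)
			break;
	}

	return i;
}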

> This is the first packet that will be
> +	 * attempted to be freed.
> +	 */
> +
> +	/* Get last segment in most recently added packet. */
> +	tx_last = sw_ring[txq->tx_tail].last_id;
> +
> +	/* Get the next segment, which is the oldest segment in ring. */
> +	tx_first = sw_ring[tx_last].next_id;
> +
> +	/* Set the current index to the first. */
> +	tx_id = tx_first;
> +
> +	/*
> +	 * Loop through each packet. For each packet, verify that an
> +	 * mbuf exists and that the last segment is free. If so, free
> +	 * it and move on.
> +	 */
> +	while (1) {
> +		tx_last = sw_ring[tx_id].last_id;
> +
> +		if (sw_ring[tx_last].mbuf) {
> +			if (!(txr[tx_last].wb.status &
> +				IXGBE_TXD_STAT_DD))
> +				break;
> +
> +			/* Get the start of the next packet. */
> +			tx_next = sw_ring[tx_last].next_id;
> +
> +			/*
> +			 * Loop through all segments in a
> +			 * packet.
> +			 */
> +			do {
> +				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> +				sw_ring[tx_id].mbuf = NULL;
> +				sw_ring[tx_id].last_id = tx_id;
> +
> +				/* Move to next segment. */
> +				tx_id = sw_ring[tx_id].next_id;
> +
> +			} while (tx_id != tx_next);
> +
> +			/*
> +			 * Increment the number of packets
> +			 * freed.
> +			 */
> +			count++;
> +
> +			if (unlikely(count == (int)free_cnt))
> +				break;
> +		} else {
> +			/*
> +			 * There are multiple reasons to be here:
> +			 * 1) All the packets on the ring have been
> +			 *    freed - tx_id is equal to tx_first
> +			 *    and some packets have been freed.
> +			 *    - Done, exit
> +			 * 2) Interfaces has not sent a rings worth of
> +			 *    packets yet, so the segment after tail is
> +			 *    still empty. Or a previous call to this
> +			 *    function freed some of the segments but
> +			 *    not all so there is a hole in the list.
> +			 *    Hopefully this is a rare case.
> +			 *    - Walk the list and find the next mbuf. If
> +			 *      there isn't one, then done.
> +			 */
> +			if (likely(tx_id == tx_first && count != 0))
> +				break;
> +
> +			/*
> +			 * Walk the list and find the next mbuf, if any.
> +			 */
> +			do {
> +				/* Move to next segment. */
> +				tx_id = sw_ring[tx_id].next_id;
> +
> +				if (sw_ring[tx_id].mbuf)
> +					break;
> +
> +			} while (tx_id != tx_first);
> +
> +			/*
> +			 * Determine why previous loop bailed. If there
> +			 * is not an mbuf, done.
> +			 */
> +			if (sw_ring[tx_id].mbuf == NULL)
> +				break;
> +		}
> +	}
> +
> +	return count;
> +}
> +
>  static void __attribute__((cold))
>  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
>  {
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 505d344b9..2c3770af6 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> 
> +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> +
>  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
>  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/4] net/i40e: cleanup Tx buffers
  2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 1/4] net/i40e: " Chenxu Di
@ 2019-12-30 13:01     ` Ananyev, Konstantin
  0 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2019-12-30 13:01 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX



> 
> Add support to the i40e driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/i40e/i40e_ethdev.c    |   1 +
>  drivers/net/i40e/i40e_ethdev_vf.c |   1 +
>  drivers/net/i40e/i40e_rxtx.c      | 121 ++++++++++++++++++++++++++++++
>  drivers/net/i40e/i40e_rxtx.h      |   1 +
>  4 files changed, 124 insertions(+)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 5999c964b..fad47a942 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
>  	.mac_addr_set                 = i40e_set_default_mac_addr,
>  	.mtu_set                      = i40e_dev_mtu_set,
>  	.tm_ops_get                   = i40e_tm_ops_get,
> +	.tx_done_cleanup              = i40e_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> index 5dba0928b..0ca5417d7 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
>  	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
>  	.mtu_set              = i40evf_dev_mtu_set,
>  	.mac_addr_set         = i40evf_set_default_mac_addr,
> +	.tx_done_cleanup      = i40e_tx_done_cleanup,
>  };
> 
>  /*
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 17dc8c78f..883419bd7 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2455,6 +2455,127 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
>  	}
>  }
> 
> +int i40e_tx_done_cleanup(void *q, uint32_t free_cnt)
> +{
> +	struct i40e_tx_queue *txq = (struct i40e_tx_queue *)q;
> +	struct i40e_tx_entry *sw_ring;
> +	volatile struct i40e_tx_desc *txr;
> +	uint16_t tx_first; /* First segment analyzed. */
> +	uint16_t tx_id;    /* Current segment being processed. */
> +	uint16_t tx_last;  /* Last segment in the current packet. */
> +	uint16_t tx_next;  /* First segment of the next packet. */
> +	int count;
> +
> +	if (txq == NULL)
> +		return -ENODEV;
> +
> +	count = 0;
> +	sw_ring = txq->sw_ring;
> +	txr = txq->tx_ring;
> +
> +	/*
> +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> +	 * of that packet (the last segment in the packet chain) and
> +	 * then the next segment will be the start of the oldest segment
> +	 * in the sw_ring. This is the first packet that will be
> +	 * attempted to be freed.
> +	 */

Pretty much same comments as for ixgbe.

> +
> +	/* Get last segment in most recently added packet. */
> +	tx_last = sw_ring[txq->tx_tail].last_id;
> +
> +	/* Get the next segment, which is the oldest segment in ring. */
> +	tx_first = sw_ring[tx_last].next_id;
> +
> +	/* Set the current index to the first. */
> +	tx_id = tx_first;
> +
> +	/*
> +	 * Loop through each packet. For each packet, verify that an
> +	 * mbuf exists and that the last segment is free. If so, free
> +	 * it and move on.
> +	 */
> +	while (1) {
> +		tx_last = sw_ring[tx_id].last_id;
> +
> +		if (sw_ring[tx_last].mbuf) {
> +			if ((txr[tx_last].cmd_type_offset_bsz &
> +			    rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> +			    rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> +				/*
> +				 * mbuf still in use, nothing left to
> +				 * free.
> +				 */
> +				break;
> +
> +			/* Get the start of the next packet. */
> +			tx_next = sw_ring[tx_last].next_id;
> +
> +			/*
> +			 * Loop through all segments in a
> +			 * packet.
> +			 */
> +			do {
> +				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> +				sw_ring[tx_id].mbuf = NULL;
> +				sw_ring[tx_id].last_id = tx_id;
> +
> +				/* Move to next segment. */
> +				tx_id = sw_ring[tx_id].next_id;
> +
> +			} while (tx_id != tx_next);
> +
> +			/*
> +			 * Increment the number of packets
> +			 * freed.
> +			 */
> +			count++;
> +
> +			if (unlikely(count == (int)free_cnt))
> +				break;
> +		} else {
> +			/*
> +			 * There are multiple reasons to be here:
> +			 * 1) All the packets on the ring have been
> +			 *    freed - tx_id is equal to tx_first
> +			 *    and some packets have been freed.
> +			 *    - Done, exit
> +			 * 2) Interfaces has not sent a rings worth of
> +			 *    packets yet, so the segment after tail is
> +			 *    still empty. Or a previous call to this
> +			 *    function freed some of the segments but
> +			 *    not all so there is a hole in the list.
> +			 *    Hopefully this is a rare case.
> +			 *    - Walk the list and find the next mbuf. If
> +			 *      there isn't one, then done.
> +			 */
> +			if (likely(tx_id == tx_first && count != 0))
> +				break;
> +
> +			/*
> +			 * Walk the list and find the next mbuf, if any.
> +			 */
> +			do {
> +				/* Move to next segment. */
> +				tx_id = sw_ring[tx_id].next_id;
> +
> +				if (sw_ring[tx_id].mbuf)
> +					break;
> +
> +			} while (tx_id != tx_first);
> +
> +			/*
> +			 * Determine why previous loop bailed. If there
> +			 * is not an mbuf, done.
> +			 */
> +			if (sw_ring[tx_id].mbuf == NULL)
> +				break;
> +		}
> +	}
> +
> +	return count;
> +}
> +
>  void
>  i40e_reset_tx_queue(struct i40e_tx_queue *txq)
>  {
> diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
> index 2106bb355..8f11f011a 100644
> --- a/drivers/net/i40e/i40e_rxtx.h
> +++ b/drivers/net/i40e/i40e_rxtx.h
> @@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
>  void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
>  void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
>  void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
> +int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
>  int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
>  void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2019-12-30 12:53     ` Ananyev, Konstantin
@ 2020-01-03  9:01       ` Di, ChenxuX
  2020-01-05 23:36         ` Ananyev, Konstantin
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-03  9:01 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Yang, Qiming

Hi,


> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, December 30, 2019 8:54 PM
> To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
> 
> Hi,
> 
> > Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup to
> > force free consumed buffers on Tx ring.
> >
> > Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
> >  drivers/net/ixgbe/ixgbe_rxtx.c   | 116 +++++++++++++++++++++++++++++++
> >  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +
> >  3 files changed, 120 insertions(+)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 2c6fd0f13..0091405db 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
> >  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
> >  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
> >  	.tm_ops_get           = ixgbe_tm_ops_get,
> > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> 
> Don't see how we can have one tx_done_cleanup() for different tx functions?
> Vector and scalar TX paths use different formats for sw_ring[] entries.
> Also offload and simple TX paths use different methods to track used/free
> descriptors, and use different functions to free them:
> offload uses tx_entry next_id, last_id plus txq->last_desc_cleaned, while simple
> TX paths use tx_next_dd.
> 

This patch will not include a function for the vector path, and I will update my code to
make it work for the offload and simple paths.
> 
> >  };
> > 
> >  /*
> > @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
> >  	.reta_query           = ixgbe_dev_rss_reta_query,
> >  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
> >  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> > +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> >  };
> > 
> >  /* store statistics names and its offset in stats structure */
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > index fa572d184..520b9c756 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -2306,6 +2306,122 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
> >  	}
> >  }
> >
> > +int ixgbe_tx_done_cleanup(void *q, uint32_t free_cnt)
> 
> That seems to work only for offload(full) TX path (ixgbe_xmit_pkts).
> Simple(fast) path seems not covered by this function.
> 

Same as above

> > +{
> > +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)q;
> > +	struct ixgbe_tx_entry *sw_ring;
> > +	volatile union ixgbe_adv_tx_desc *txr;
> > +	uint16_t tx_first; /* First segment analyzed. */
> > +	uint16_t tx_id;    /* Current segment being processed. */
> > +	uint16_t tx_last;  /* Last segment in the current packet. */
> > +	uint16_t tx_next;  /* First segment of the next packet. */
> > +	int count;
> > +
> > +	if (txq == NULL)
> > +		return -ENODEV;
> > +
> > +	count = 0;
> > +	sw_ring = txq->sw_ring;
> > +	txr = txq->tx_ring;
> > +
> > +	/*
> > +	 * tx_tail is the last sent packet on the sw_ring. Goto the end
> > +	 * of that packet (the last segment in the packet chain) and
> > +	 * then the next segment will be the start of the oldest segment
> > +	 * in the sw_ring.
> 
> Not sure I understand the sentence above.
> tx_tail is the value of TDT HW register (most recently armed by SW TD).
> last_id  is the index of last descriptor for multi-seg packet.
> next_id is just the index of next descriptor in HW TD ring.
> How do you conclude that it will be the 'oldest segment in the sw_ring'?
> 

The tx_tail is the last sent packet on the sw_ring, and xmit_cleanup() or
tx_free_bufs() will have been called whenever nb_tx_free < tx_free_thresh.
So sw_ring[tx_tail].next_id must be the beginning of the mbufs that are unused or
already freed; the loop then starts there and frees mbufs until it reaches one still in use.
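In other words (schematic, using the same names as the patch):

	tx_last  = sw_ring[txq->tx_tail].last_id; /* last seg of newest packet */
	tx_first = sw_ring[tx_last].next_id;      /* oldest seg still on the ring */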



> Another question: why do you need to write your own functions?
> Why can't you reuse the existing ixgbe_xmit_cleanup() for the full(offload) path and
> ixgbe_tx_free_bufs() for the simple path?
> Yes, ixgbe_xmit_cleanup() doesn't free mbufs, but at least it could be used to
> determine finished TX descriptors.
> Based on that you can free the appropriate sw_ring[] entries.
> 

The reason why I don't reuse the existing functions is that they all free several mbufs at a time,
while the free_cnt of the API rte_eth_tx_done_cleanup() is a number of packets.
It also needs to be checked which mbufs belong to the same packet.
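Roughly, the walk has to look like this (only a sketch of the counting problem;
descriptor_done() is a placeholder for the driver's DD test, not a real function):

	int pkts = 0;
	uint16_t tx_id = tx_first;

	while (pkts < (int)free_cnt) {
		/* Last segment of the packet that tx_id belongs to. */
		uint16_t tx_last = sw_ring[tx_id].last_id;

		if (!descriptor_done(txq, tx_last))
			break;	/* packet still owned by HW */

		/* Free every segment of this packet, then count it. */
		while (tx_id != sw_ring[tx_last].next_id) {
			rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
			sw_ring[tx_id].mbuf = NULL;
			tx_id = sw_ring[tx_id].next_id;
		}
		pkts++;
	}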


> > +	 * This is the first packet that will be
> > +	 * attempted to be freed.
> > +	 */
> > +
> > +	/* Get last segment in most recently added packet. */
> > +	tx_last = sw_ring[txq->tx_tail].last_id;
> > +
> > +	/* Get the next segment, which is the oldest segment in ring. */
> > +	tx_first = sw_ring[tx_last].next_id;
> > +
> > +	/* Set the current index to the first. */
> > +	tx_id = tx_first;
> > +
> > +	/*
> > +	 * Loop through each packet. For each packet, verify that an
> > +	 * mbuf exists and that the last segment is free. If so, free
> > +	 * it and move on.
> > +	 */
> > +	while (1) {
> > +		tx_last = sw_ring[tx_id].last_id;
> > +
> > +		if (sw_ring[tx_last].mbuf) {
> > +			if (!(txr[tx_last].wb.status &
> > +					IXGBE_TXD_STAT_DD))
> > +				break;
> > +
> > +			/* Get the start of the next packet. */
> > +			tx_next = sw_ring[tx_last].next_id;
> > +
> > +			/*
> > +			 * Loop through all segments in a
> > +			 * packet.
> > +			 */
> > +			do {
> > +				rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > +				sw_ring[tx_id].mbuf = NULL;
> > +				sw_ring[tx_id].last_id = tx_id;
> > +
> > +				/* Move to next segment. */
> > +				tx_id = sw_ring[tx_id].next_id;
> > +
> > +			} while (tx_id != tx_next);
> > +
> > +			/*
> > +			 * Increment the number of packets
> > +			 * freed.
> > +			 */
> > +			count++;
> > +
> > +			if (unlikely(count == (int)free_cnt))
> > +				break;
> > +		} else {
> > +			/*
> > +			 * There are multiple reasons to be here:
> > +			 * 1) All the packets on the ring have been
> > +			 *    freed - tx_id is equal to tx_first
> > +			 *    and some packets have been freed.
> > +			 *    - Done, exit
> > +			 * 2) Interfaces has not sent a rings worth of
> > +			 *    packets yet, so the segment after tail is
> > +			 *    still empty. Or a previous call to this
> > +			 *    function freed some of the segments but
> > +			 *    not all so there is a hole in the list.
> > +			 *    Hopefully this is a rare case.
> > +			 *    - Walk the list and find the next mbuf. If
> > +			 *      there isn't one, then done.
> > +			 */
> > +			if (likely(tx_id == tx_first && count != 0))
> > +				break;
> > +
> > +			/*
> > +			 * Walk the list and find the next mbuf, if any.
> > +			 */
> > +			do {
> > +				/* Move to next segment. */
> > +				tx_id = sw_ring[tx_id].next_id;
> > +
> > +				if (sw_ring[tx_id].mbuf)
> > +					break;
> > +
> > +			} while (tx_id != tx_first);
> > +
> > +			/*
> > +			 * Determine why previous loop bailed. If there
> > +			 * is not an mbuf, done.
> > +			 */
> > +			if (sw_ring[tx_id].mbuf == NULL)
> > +				break;
> > +		}
> > +	}
> > +
> > +	return count;
> > +}
> > +
> >  static void __attribute__((cold))
> >  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
> >  {
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> > index 505d344b9..2c3770af6 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > @@ -285,6 +285,8 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
> >  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
> >  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> > 
> > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > +
> >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > 
> > --
> > 2.17.1
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-03  9:01       ` Di, ChenxuX
@ 2020-01-05 23:36         ` Ananyev, Konstantin
  2020-01-06  9:03           ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-05 23:36 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming


[snip]

> The reason why I don't reuse the existing functions is that they all free several mbufs at a time,
> while the free_cnt of the API rte_eth_tx_done_cleanup() is a number of packets.
> It also needs to be checked which mbufs belong to the same packet.

At first, I don't see anything bad if tx_done_cleanup() frees only some segments from
the packet. As long as it is safe - there is no problem with that.
I think rte_eth_tx_done_cleanup() operates on mbuf, not packet quantities.
But in our case I think it doesn't matter, as ixgbe_xmit_cleanup()
marks TXDs as free only when HW is done with all TXDs for that packet.
As long as there is a way to reuse existing code and avoid duplication
(without introducing any degradation) - we should use it.
And I think there is a very good opportunity here to reuse the existing
ixgbe_xmit_cleanup() for the tx_done_cleanup() implementation.
Moreover, because your code doesn't follow the ixgbe_xmit_pkts()/ixgbe_xmit_cleanup()
logic and infrastructure, it introduces unnecessary scans over the TXD ring,
and in some cases doesn't work as expected:

+	while (1) {
+		tx_last = sw_ring[tx_id].last_id;
+
+		if (sw_ring[tx_last].mbuf) {
+			if (txr[tx_last].wb.status &
+					IXGBE_TXD_STAT_DD) {
...
+			} else {
+				/*
+				 * mbuf still in use, nothing left to
+				 * free.
+				 */
+				break;

It is not correct to expect that IXGBE_TXD_STAT_DD will be set on the last TXD of *every* packet.
We set the IXGBE_TXD_CMD_RS bit only on the threshold packet's last descriptor.
Plus, ixgbe_xmit_cleanup() can clean up TXD wb.status.
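Just to illustrate (schematic, not a verbatim quote of ixgbe_xmit_pkts()):

	/* RS is requested only once per tx_rs_thresh used descriptors,
	 * so only that packet's last TXD ever gets DD written back: */
	if (txq->nb_tx_used >= txq->tx_rs_thresh) {
		cmd_type_len |= IXGBE_TXD_CMD_RS;
		txq->nb_tx_used = 0;
	}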

So I strongly recommend reusing ixgbe_xmit_cleanup() here.
It would be much less error prone and will help to avoid code duplication.

Konstantin 

[snip]


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-05 23:36         ` Ananyev, Konstantin
@ 2020-01-06  9:03           ` Di, ChenxuX
  2020-01-06 13:26             ` Ananyev, Konstantin
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-06  9:03 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Yang, Qiming



> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, January 6, 2020 7:36 AM
> To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
> 
> 
> > > > Add support to the ixgbe driver for the API
> > > > rte_eth_tx_done_cleanup to force free consumed buffers on Tx ring.

[snip]

> > > Another question: why do you need to write your own functions?
> > > Why can't you reuse the existing ixgbe_xmit_cleanup() for the
> > > full(offload) path and
> > > ixgbe_tx_free_bufs() for the simple path?
> > > Yes, ixgbe_xmit_cleanup() doesn't free mbufs, but at least it could
> > > be used to determine finished TX descriptors.
> > > Based on that you can free the appropriate sw_ring[] entries.
> > >
> >
> > The reason why I don't reuse the existing functions is that they all free
> > several mbufs at a time, while the free_cnt of the API rte_eth_tx_done_cleanup() is
> > a number of packets.
> > It also needs to be checked which mbufs belong to the same packet.
> 
> At first, I don't see anything bad if tx_done_cleanup() frees only some
> segments from the packet. As long as it is safe - there is no problem with that.
> I think rte_eth_tx_done_cleanup() operates on mbuf, not packet quantities.
> But in our case I think it doesn't matter, as ixgbe_xmit_cleanup() marks TXDs as
> free only when HW is done with all TXDs for that packet.
> As long as there is a way to reuse existing code and avoid duplication (without
> introducing any degradation) - we should use it.
> And I think there is a very good opportunity here to reuse the existing
> ixgbe_xmit_cleanup() for the tx_done_cleanup() implementation.
> Moreover, because your code doesn't follow the
> ixgbe_xmit_pkts()/ixgbe_xmit_cleanup()
> logic and infrastructure, it introduces unnecessary scans over the TXD ring, and in
> some cases doesn't work as expected:
> 
> +	while (1) {
> +		tx_last = sw_ring[tx_id].last_id;
> +
> +		if (sw_ring[tx_last].mbuf) {
> +			if (txr[tx_last].wb.status &
> +					IXGBE_TXD_STAT_DD) {
> ...
> +			} else {
> +				/*
> +				 * mbuf still in use, nothing left to
> +				 * free.
> +				 */
> +				break;
> 
> It is not correct to expect that IXGBE_TXD_STAT_DD will be set on the last TXD of
> *every* packet.
> We set the IXGBE_TXD_CMD_RS bit only on the threshold packet's last descriptor.
> Plus, ixgbe_xmit_cleanup() can clean up TXD wb.status.
> 
> So I strongly recommend reusing ixgbe_xmit_cleanup() here.
> It would be much less error prone and will help to avoid code duplication.
> 
> Konstantin
> 

First, the function ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) cleans up the TXD wb.status,
and the number of descriptors cleaned is always txq->tx_rs_thresh.

The API rte_eth_tx_done_cleanup() in rte_eth_dev.h says:
	@param free_cnt
 	*   Maximum number of packets to free. Use 0 to indicate all possible packets
 	*   should be freed. Note that a packet may be using multiple mbufs.
so a packet count must be honored, while ixgbe_xmit_cleanup() and ixgbe_tx_free_bufs() take only the one parameter txq.
What has to be done is not only freeing the buffers and cleaning the status bits, but also checking
which mbufs belong to one packet and counting the packets freed.
So I don't think this can be implemented by reusing xmit_cleanup() without changing it,
and creating a new function with the code of xmit_cleanup() would cause a lot of duplication.

All in all, it doesn't seem like a perfect idea to reuse ixgbe_xmit_cleanup().

Second, the function in this patch is copied from code in igb_rxtx.c, which was already updated in 2017;
the commit id is 8d907d2b79f7a54c809f1c44970ff455fa2865e1.
I trust that the logic of the code is right.
Admittedly it is not complete for ixgbe, i40e and ice, since it doesn't update the values of
last_desc_cleaned and tx_next_dd, and it should start from last_desc_cleaned or
tx_next_dd (for the offload or simple path) rather than from tx_tail.

So, I suggest using the old function and fixing that issue.
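To be concrete, the fix I have in mind is roughly this (a sketch only; nb_segs
stands for the number of descriptors freed for one packet and is not a real
variable in the patch):

	/* After freeing one packet's segments, also advance the queue
	 * bookkeeping (offload path shown; the simple path would update
	 * tx_next_dd in the same way): */
	txq->last_desc_cleaned = tx_last;
	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_segs);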

[snip]


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-06  9:03           ` Di, ChenxuX
@ 2020-01-06 13:26             ` Ananyev, Konstantin
  2020-01-07 10:46               ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-06 13:26 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming



[snip]
> 
> First, the function ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) cleans up the TXD wb.status,
> and the number of descriptors cleaned is always txq->tx_rs_thresh.

Yes, and what's wrong with it?

> 
> The API rte_eth_tx_done_cleanup() in rte_eth_dev.h says:
> 	@param free_cnt
>  	*   Maximum number of packets to free. Use 0 to indicate all possible packets
>  	*   should be freed. Note that a packet may be using multiple mbufs.

I don't think it is a good approach - it would be better to report the number of freed mbufs -
but OK, as it is a public API, we probably need to keep it as it is.

> so a packet count must be honored, while ixgbe_xmit_cleanup() and ixgbe_tx_free_bufs() take only the one parameter txq.

Yes, ixgbe_xmit_cleanup() cleans up at least txq->tx_rs_thresh TXDs.
So if the user requests more packets to be freed, we can call ixgbe_xmit_cleanup()
in a loop.

> What has to be done is not only freeing the buffers and cleaning the status bits, but also checking
> which mbufs belong to one packet and counting the packets freed.

ixgbe_xmit_cleanup() doesn't free mbufs itself.
It only cleans up TXDs.
So in tx_done_cleanup(), after calling ixgbe_xmit_cleanup(),
you'll still need to go through the sw_ring[]
entries that correspond to free TXDs and call rte_pktmbuf_free_seg() on them.
You can count the number of full packets there.

> So I don't think this can be implemented by reusing xmit_cleanup() without changing it,
> and creating a new function with the code of xmit_cleanup() would cause a lot of duplication.

I don't think it would.
I think all we need here is something like this
(note it is a schematic one, and doesn't take into account that the TXD ring is circular):

tx_done_cleanup(..., uint32_t cnt)
{
      /* we have txq->nb_tx_free TXDs starting from txq->tx_tail.
           Scan them first and free as many mbufs as we can.
           If we need more mbufs to free call  ixgbe_xmit_cleanup()
           to free more TXDs. */ 

       swr = txq->sw_ring;
       txr     = txq->tx_ring;
       id   = txq->tx_tail;
       free =  txq->nb_tx_free;       
  
       for (n = 0; n < cnt && free != 0; ) {

          for (j = 0; j != free && n < cnt; j++) {
             swe = &swr[id + j];
             if (swe->mbuf != NULL) {
                   rte_pktmbuf_free_seg(swe->mbuf);
                   swe->mbuf = NULL;
             }
             n += (swe->last_id == id + j);
          } 

          if (n < cnt) { 
               ixgbe_xmit_cleanup(txq);
               free =   txq->nb_tx_free - free;
          }
     }
     return n;
}
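For the circular ring the index math above would of course need to wrap,
something like:

             pos = (id + j) % txq->nb_tx_desc;  /* wrap-aware index */
             swe = &swr[pos];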

> 
> All in all, it doesn't seem like a perfect idea to reuse ixgbe_xmit_cleanup().

Totally disagree, see above.

> 
> Second, the function in this patch is copied from code in igb_rxtx.c, which was already updated in 2017;
> the commit id is 8d907d2b79f7a54c809f1c44970ff455fa2865e1.

I realized that.
But I see it as a problem, not a positive thing.
While they do have some similarities, igb and ixgbe are PMDs for different devices,
and their TX code differs quite a lot. Let's say igb doesn't use tx_rs_thresh,
but instead sets the RS bit on each last TXD.
So, just blindly copying tx_done_cleanup() from igb to ixgbe doesn't look like a good
idea to me.

> I trust that the logic of the code is right.
> Admittedly it is not complete for ixgbe, i40e and ice, since it doesn't update the values of
> last_desc_cleaned and tx_next_dd, and it should start from last_desc_cleaned or
> tx_next_dd (for the offload or simple path) rather than from tx_tail.
> 
> So, I suggest using the old function and fixing that issue.
> 
> > >
> > >
> > > > >This is the first packet that will be
> > > > > + * attempted to be freed.
> > > > > + */
> > > > > +
> > > > > +/* Get last segment in most recently added packet. */ tx_last =
> > > > > +sw_ring[txq->tx_tail].last_id;
> > > > > +
> > > > > +/* Get the next segment, which is the oldest segment in ring. */
> > > > > +tx_first = sw_ring[tx_last].next_id;
> > > > > +
> > > > > +/* Set the current index to the first. */ tx_id = tx_first;
> > > > > +
> > > > > +/*
> > > > > + * Loop through each packet. For each packet, verify that an
> > > > > + * mbuf exists and that the last segment is free. If so, free
> > > > > + * it and move on.
> > > > > + */
> > > > > +while (1) {
> > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > +
> > > > > +if (sw_ring[tx_last].mbuf) {
> > > > > +if (!(txr[tx_last].wb.status &
> > > > > +IXGBE_TXD_STAT_DD))
> > > > > +break;
> > > > > +
> > > > > +/* Get the start of the next packet. */ tx_next =
> > > > > +sw_ring[tx_last].next_id;
> > > > > +
> > > > > +/*
> > > > > + * Loop through all segments in a
> > > > > + * packet.
> > > > > + */
> > > > > +do {
> > > > > +rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > > > > +sw_ring[tx_id].mbuf = NULL;
> > > > > +sw_ring[tx_id].last_id = tx_id;
> > > > > +
> > > > > +/* Move to next segment. */
> > > > > +tx_id = sw_ring[tx_id].next_id;
> > > > > +
> > > > > +} while (tx_id != tx_next);
> > > > > +
> > > > > +/*
> > > > > + * Increment the number of packets
> > > > > + * freed.
> > > > > + */
> > > > > +count++;
> > > > > +
> > > > > +if (unlikely(count == (int)free_cnt)) break; } else {
> > > > > +/*
> > > > > + * There are multiple reasons to be here:
> > > > > + * 1) All the packets on the ring have been
> > > > > + *    freed - tx_id is equal to tx_first
> > > > > + *    and some packets have been freed.
> > > > > + *    - Done, exit
> > > > > + * 2) Interfaces has not sent a rings worth of
> > > > > + *    packets yet, so the segment after tail is
> > > > > + *    still empty. Or a previous call to this
> > > > > + *    function freed some of the segments but
> > > > > + *    not all so there is a hole in the list.
> > > > > + *    Hopefully this is a rare case.
> > > > > + *    - Walk the list and find the next mbuf. If
> > > > > + *      there isn't one, then done.
> > > > > + */
> > > > > +if (likely(tx_id == tx_first && count != 0)) break;
> > > > > +
> > > > > +/*
> > > > > + * Walk the list and find the next mbuf, if any.
> > > > > + */
> > > > > +do {
> > > > > +/* Move to next segment. */
> > > > > +tx_id = sw_ring[tx_id].next_id;
> > > > > +
> > > > > +if (sw_ring[tx_id].mbuf)
> > > > > +break;
> > > > > +
> > > > > +} while (tx_id != tx_first);
> > > > > +
> > > > > +/*
> > > > > + * Determine why previous loop bailed. If there
> > > > > + * is not an mbuf, done.
> > > > > + */
> > > > > +if (sw_ring[tx_id].mbuf == NULL)
> > > > > +break;
> > > > > +}
> > > > > +}
> > > > > +
> > > > > +return count;
> > > > > +}
> > > > > +
> > > > >  static void __attribute__((cold))  ixgbe_tx_free_swring(struct
> > > > > ixgbe_tx_queue *txq)  { diff --git
> > > > > a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > index 505d344b9..2c3770af6 100644
> > > > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > @@ -285,6 +285,8 @@ int
> > > > > ixgbe_rx_vec_dev_conf_condition_check(struct
> > > > > rte_eth_dev *dev);  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue
> > > > > *rxq);  void ixgbe_rx_queue_release_mbufs_vec(struct
> > > > > ixgbe_rx_queue *rxq);
> > > > >
> > > > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > > > +
> > > > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > > > >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > > > >
> > > > > --
> > > > > 2.17.1
> > > >
> > >
> >
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-06 13:26             ` Ananyev, Konstantin
@ 2020-01-07 10:46               ` Di, ChenxuX
  2020-01-07 14:09                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-07 10:46 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Yang, Qiming

Hi

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, January 6, 2020 9:26 PM
> To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
> 
> 
> 
[snip]
> 
> Yes, ixgbe_xmit_cleanup() cleans up at least txq->tx_rs_thresh TXDs.
> So if the user requests more packets to be freed, we can call ixgbe_xmit_cleanup()
> in a loop.
> 

That is a great idea, and I discussed it with my workmate today. There is still a
question that we are not sure about.
We can indeed call ixgbe_xmit_cleanup() in a loop if the user requests more packets,
but how do we deal with the remainder? For example:
	The default tx_rs_thresh is 32, and suppose the count of mbufs we need to free is 50.
	We can call ixgbe_xmit_cleanup() once to free 32 mbufs.
	Then what about the other 18 mbufs?
	1. If we do nothing, it doesn't look good.
	2. If a second call to ixgbe_xmit_cleanup() succeeds, we free 14 mbufs more than requested.
	3. If the second call fails because the status of descriptor No. 32 is not DD
	   while No. 18 is DD, then we don't free the 18 mbufs that we can and should free.

We have tried some plans for this, such as adding a parameter to ixgbe_xmit_cleanup(), changing
tx_rs_thresh, or copying the code of ixgbe_xmit_cleanup() into a new function with more
parameters. But none of them seem perfect.

So can you give some comment on this? It seems reusing the function is not as easy as we thought.
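For example, the "add a parameter" plan looked roughly like this (a sketch only;
this helper doesn't exist in the driver):

	/* Hypothetical: clean up until at least 'nb_free' TXDs are free,
	 * looping the existing threshold-sized cleanup while it still
	 * makes progress, so the remainder above would be covered too. */
	static int
	ixgbe_xmit_cleanup_count(struct ixgbe_tx_queue *txq, uint16_t nb_free)
	{
		int ret = 0;

		while (txq->nb_tx_free < nb_free && ret == 0)
			ret = ixgbe_xmit_cleanup(txq);

		return ret;
	}

But then case 3 above still applies: the loop stops on the first failure even
though some earlier descriptors may already be done.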


[snip]


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-07 10:46               ` Di, ChenxuX
@ 2020-01-07 14:09                 ` Ananyev, Konstantin
  2020-01-08 10:15                   ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-07 14:09 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming


Hi Chenxu,

[snip]

> > Yes, ixgbe_xmit_cleanup() cleans up at least txq->tx_rs_thresh TXDs.
> > So if user requested more packets to be freed we can call ixgbe_xmit_cleanup()
> > in a loop.
> >
> 
> That is a great idea and I discuss with my workmate about it today. there is also some
> Question that we don’t confirm.
> Actually it can call ixgbe_xmit_cleanup() in a loop if user requested more packets,
> How to deal with the MOD. For example:
> 	The default tx_rs_thresh is 32.if the count of mbufs we need free is 50.
> 	We can call ixgbe_xmit_cleanup() one time to free 32 mbufs.
> 	Then how about other 18 mbufs.
> 	1.If we do nothing, it looks not good.
> 	2.if we call ixgbe_xmit_cleanup() successfully, we free 14 mbufs more.
> 	3.if we call ixgbe_xmit_cleanup() fail, the status of No.32 mbufs is not DD
> 	  While No .18 is DD. So we do not free 18 mbufs what we can and should free.
> 
> We have try some plans about it, likes add parameter for ixgbe_xmit_cleanup(), change
>  Tx_rs_thresh or copy the code or ixgbe_xmit_cleanup() as a new function with more
> Parameter. But all of them seem not perfect.
> 
> So  can you give some comment about it? It seems not easy as we think by reuse function.

My thought about it:
for situations when cnt % rs_thresh != 0,
we'll still call ixgbe_xmit_cleanup() cnt / rs_thresh + 1 times.
But then, on the last iteration, we can free just cnt % rs_thresh mbufs,
keeping the rest of them intact.
  
Let's say at some moment we have txq->nb_tx_free == 0,
and the user calls tx_done_cleanup(txq, ..., cnt==50).
So we call ixgbe_xmit_cleanup(txq) a first time.
Suppose it frees 32 TXDs; we then walk through the corresponding
sw_ring[] entries and, let's say, free 32 packets (one mbuf per packet).
Then we call ixgbe_xmit_cleanup(txq) a second time.
Suppose it frees another 32 TXDs; we again walk through sw_ring[],
but free only the first 18 mbufs and return.
Suppose we call tx_done_cleanup(txq, cnt=50) immediately again.
Now txq->nb_tx_free == 64, so we can start to scan sw_ring[]
from tx_tail straightaway. We'll skip the first 50 entries, as they are
already empty, then free the remaining 14 mbufs, then
call ixgbe_xmit_cleanup(txq) again, and if it succeeds,
scan and free another 32 sw_ring[] entries.
Then we call ixgbe_xmit_cleanup(txq) once more, but free
only the first 8 available sw_ring[].mbuf entries.
Probably a bit easier with the code:

tx_done_cleanup(..., uint32_t cnt)
{
	/* Schematic: index wrap-around over the circular TXD ring is omitted. */
	swr = txq->sw_ring;
	txr = txq->tx_ring;
	id = txq->tx_tail;

	if (txq->nb_tx_free == 0)
		ixgbe_xmit_cleanup(txq);

	free = txq->nb_tx_free;
	for (n = 0; n < cnt && free != 0; ) {

		for (j = 0; j != free && n < cnt; j++) {
			swe = &swr[id + j];
			if (swe->mbuf != NULL) {
				rte_pktmbuf_free_seg(swe->mbuf);
				swe->mbuf = NULL;

				/* last segment in the packet, increment packet count */
				n += (swe->last_id == id + j);
			}
		}

		if (n < cnt) {
			/* need more: ask the PMD to complete more TXDs */
			ixgbe_xmit_cleanup(txq);
			free = txq->nb_tx_free - free;
		}
	}
	return n;
}

For the situation when fewer than tx_rs_thresh TXDs are completed
((txq->tx_ring[desc_to_clean_to].wb.status & IXGBE_TXD_STAT_DD) == 0)
we do nothing - in that case we consider there are no more mbufs to free.
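
Just to make the wrap-around explicit (the sketch above ignores that the
TXD ring is circular), an illustrative variant with the index wrapped
modulo nb_tx_desc could look like this - still schematic and untested,
same assumptions as above (swe->last_id marks the packet's last segment):

static int
tx_done_cleanup(struct ixgbe_tx_queue *txq, uint32_t cnt)
{
	struct ixgbe_tx_entry *swr = txq->sw_ring;
	struct ixgbe_tx_entry *swe;
	uint16_t id = txq->tx_tail;
	uint32_t n, j, free;

	if (txq->nb_tx_free == 0)
		ixgbe_xmit_cleanup(txq);

	free = txq->nb_tx_free;
	for (n = 0; n < cnt && free != 0; ) {

		for (j = 0; j != free && n < cnt; j++) {
			/* wrap the index around the circular TXD ring */
			uint16_t idx = (uint16_t)((id + j) % txq->nb_tx_desc);

			swe = &swr[idx];
			if (swe->mbuf != NULL) {
				rte_pktmbuf_free_seg(swe->mbuf);
				swe->mbuf = NULL;

				/* last segment in the packet, count it */
				n += (swe->last_id == idx);
			}
		}
		id = (uint16_t)((id + j) % txq->nb_tx_desc);

		if (n < cnt) {
			ixgbe_xmit_cleanup(txq);
			/* TXDs newly completed since the previous pass */
			free = txq->nb_tx_free - free;
		}
	}
	return n;
}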

> 
> 
> > > And what should do is not only free buffers and status but also check
> > > which bufs are from  One packet and count the packet freed.
> >
> > ixgbe_xmit_cleanup() itself doesn't free mbufs itself.
> > It only cleans up TXDs.
> > So in tx_done_cleanup() after calling ixgbe_xmit_cleanup() you'll still need to go
> > through sw_ring[] entries that correspond to free TXDs and call mbuf_seg_free().
> > You can count number of full packets here.
> >
> > > So I think it can't be implemented that reuse function xmit_cleanup without
> > change it.
> > > And create a new function with the code of xmit_cleanup will cause many
> > duplication.
> >
> > I don't think it would.
> > I think all we need here is something like that (note it is schematic one, doesn't
> > take into accounf that TXD ring is circular):
> >
> > tx_done_cleanup(..., uint32_t cnt)
> > {
> >       /* we have txq->nb_tx_free TXDs starting from txq->tx_tail.
> >            Scan them first and free as many mbufs as we can.
> >            If we need more mbufs to free call  ixgbe_xmit_cleanup()
> >            to free more TXDs. */
> >
> >        swr = txq->sw_ring;
> >        txr     = txq->tx_ring;
> >        id   = txq->tx_tail;
> >        free =  txq->nb_tx_free;
> >
> >        for (n = 0; n < cnt && free != 0; ) {
> >
> >           for (j = 0; j != free && n < cnt; j++) {
> >              swe = &swr[id + j];
> >              if (swe->mbuf != NULL) {
> >                    rte_pktmbuf_free_seg(swe->mbuf);
> >                    swe->mbuf = NULL;
> >              }
> >              n += (swe->last_id == id + j)
> >           }
> >
> >           if (n < cnt) {
> >                ixgbe_xmit_cleanup(txq);
> >                free =   txq->nb_tx_free - free;
> >           }
> >      }
> >      return n;
> > }
> >
> > >
> > > Above all , it seem not a perfect idea to reuse ixgbe_xmit_cleanup().
> >
> > Totally disagree, see above.
> >
> > >
> > > Second.
> > > The function in patch is copy from code in igb_rxtx.c. it already
> > > updated in 2017, The commit id is
> > 8d907d2b79f7a54c809f1c44970ff455fa2865e1.
> >
> > I realized that.
> > But I think it as a problem, not a positive thing.
> > While they do have some similarities, igb abd ixgbe are PMDs for different
> > devices, and their TX code differs quite a lot. Let say igb doesn't use
> > tx_rs_threshold, but instead set RS bit for each last TXD.
> > So, just blindly copying tx_done_cleanup() from igb to ixgbe doesn't look like a
> > good idea to me.
> >
> > > I trust the logic of code is right.
> > > Actually it don't complete for ixgbe, i40e and ice, while it don't
> > > change the value of last_desc_cleaned and tx_next_dd. And it's
> > > beginning prefer last_desc_cleaned or  tx_next_dd(for offload or simple) to
> > tx_tail.
> > >
> > > So, I suggest to use the old function and fix the issue.
> > >
> > > > >
> > > > >
> > > > > > >This is the first packet that will be
> > > > > > > + * attempted to be freed.
> > > > > > > + */
> > > > > > > +
> > > > > > > +/* Get last segment in most recently added packet. */ tx_last
> > > > > > > += sw_ring[txq->tx_tail].last_id;
> > > > > > > +
> > > > > > > +/* Get the next segment, which is the oldest segment in ring.
> > > > > > > +*/ tx_first = sw_ring[tx_last].next_id;
> > > > > > > +
> > > > > > > +/* Set the current index to the first. */ tx_id = tx_first;
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Loop through each packet. For each packet, verify that an
> > > > > > > + * mbuf exists and that the last segment is free. If so, free
> > > > > > > + * it and move on.
> > > > > > > + */
> > > > > > > +while (1) {
> > > > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > > > +
> > > > > > > +if (sw_ring[tx_last].mbuf) {
> > > > > > > +if (!(txr[tx_last].wb.status &
> > > > > > > +IXGBE_TXD_STAT_DD))
> > > > > > > +break;
> > > > > > > +
> > > > > > > +/* Get the start of the next packet. */ tx_next =
> > > > > > > +sw_ring[tx_last].next_id;
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Loop through all segments in a
> > > > > > > + * packet.
> > > > > > > + */
> > > > > > > +do {
> > > > > > > +rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > > > > > > +sw_ring[tx_id].mbuf = NULL;
> > > > > > > +sw_ring[tx_id].last_id = tx_id;
> > > > > > > +
> > > > > > > +/* Move to next segment. */
> > > > > > > +tx_id = sw_ring[tx_id].next_id;
> > > > > > > +
> > > > > > > +} while (tx_id != tx_next);
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Increment the number of packets
> > > > > > > + * freed.
> > > > > > > + */
> > > > > > > +count++;
> > > > > > > +
> > > > > > > +if (unlikely(count == (int)free_cnt)) break; } else {
> > > > > > > +/*
> > > > > > > + * There are multiple reasons to be here:
> > > > > > > + * 1) All the packets on the ring have been
> > > > > > > + *    freed - tx_id is equal to tx_first
> > > > > > > + *    and some packets have been freed.
> > > > > > > + *    - Done, exit
> > > > > > > + * 2) Interfaces has not sent a rings worth of
> > > > > > > + *    packets yet, so the segment after tail is
> > > > > > > + *    still empty. Or a previous call to this
> > > > > > > + *    function freed some of the segments but
> > > > > > > + *    not all so there is a hole in the list.
> > > > > > > + *    Hopefully this is a rare case.
> > > > > > > + *    - Walk the list and find the next mbuf. If
> > > > > > > + *      there isn't one, then done.
> > > > > > > + */
> > > > > > > +if (likely(tx_id == tx_first && count != 0)) break;
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Walk the list and find the next mbuf, if any.
> > > > > > > + */
> > > > > > > +do {
> > > > > > > +/* Move to next segment. */
> > > > > > > +tx_id = sw_ring[tx_id].next_id;
> > > > > > > +
> > > > > > > +if (sw_ring[tx_id].mbuf)
> > > > > > > +break;
> > > > > > > +
> > > > > > > +} while (tx_id != tx_first);
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * Determine why previous loop bailed. If there
> > > > > > > + * is not an mbuf, done.
> > > > > > > + */
> > > > > > > +if (sw_ring[tx_id].mbuf == NULL) break; } }
> > > > > > > +
> > > > > > > +return count;
> > > > > > > +}
> > > > > > > +
> > > > > > >  static void __attribute__((cold))
> > > > > > > ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)  { diff --git
> > > > > > > a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > b/drivers/net/ixgbe/ixgbe_rxtx.h index 505d344b9..2c3770af6
> > > > > > > 100644
> > > > > > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > @@ -285,6 +285,8 @@ int
> > > > > > > ixgbe_rx_vec_dev_conf_condition_check(struct
> > > > > > > rte_eth_dev *dev);  int ixgbe_rxq_vec_setup(struct
> > > > > > > ixgbe_rx_queue *rxq);  void
> > > > > > > ixgbe_rx_queue_release_mbufs_vec(struct
> > > > > > > ixgbe_rx_queue *rxq);
> > > > > > >
> > > > > > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > > > > > +
> > > > > > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > > > > > >  extern const uint32_t
> > > > > > > ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > > > > > >
> > > > > > > --
> > > > > > > 2.17.1
> > > > > >
> > > > >
> > > >
> > >
> >
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-07 14:09                 ` Ananyev, Konstantin
@ 2020-01-08 10:15                   ` Di, ChenxuX
  2020-01-08 15:12                     ` Ananyev, Konstantin
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-08 10:15 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Yang, Qiming

Hi, Konstantin

Thanks for your review.

Based on our investigation, we don't think reusing ixgbe_xmit_cleanup() or ixgbe_tx_free_bufs() is a good plan.
The following two points show the reason.


First, even with your plan, it doesn't solve the situation where cnt % rs_thresh != 0.

For example, suppose the API rte_eth_tx_done_cleanup() is called with free_cnt=0, meaning we want to free all possible mbufs (one mbuf per packet). After checking, we find that
the No.18 mbuf's status is DD, while the mbufs from No.19 onwards are still in use.

If the plan is to call ixgbe_xmit_cleanup() cnt / rs_thresh + 1 times (1 time in this example),
then after ixgbe_xmit_cleanup() the value of last_desc_cleaned no longer matches the actual state.
Since ixgbe_xmit_pkts() only calls ixgbe_xmit_cleanup() automatically when nb_tx_free < tx_free_thresh, the mbufs before last_desc_cleaned (No.19~No.31 in this example) will not be freed.

However, all of the above is for ixgbe_xmit_pkts() and ixgbe_xmit_cleanup(). For ixgbe_xmit_pkts_simple() and ixgbe_tx_free_bufs() it may be worse,
because the buffer-freeing action is inside ixgbe_tx_free_bufs(); we don't free in the outside code, so no mbufs will be freed at all.

And if we do free the remainder (MOD) mbufs in outside code and update the values of tx_next_dd and nb_tx_free, then there is no need to call ixgbe_tx_free_bufs() at all.
 
Here are two cases about tx_done_cleanup() and ixgbe_tx_free_bufs() that show this in more detail.

Case 1
Status:
	Suppose that before calling tx_done_cleanup(..., uint32_t cnt), the txq status is as follows.

	one mbuf per packet
	txq->tx_rs_thresh = 32;
	cleaning starts from index id
	txr[id+9].wb.status & IXGBE_TXD_STAT_DD != 0.   // means we could free 10 packets
	txr[id+31].wb.status & IXGBE_TXD_STAT_DD == 0.  // means we could not clean 32 packets

Process Flow:
	tx_done_cleanup(..., 10) is invoked to free 10 packets.
	Firstly, tx_done_cleanup() invokes ixgbe_xmit_cleanup() to count how many mbufs can be freed.
	But ixgbe_xmit_cleanup() will return -1 (meaning no mbufs to free); please refer to the code below:

	status = txr[desc_to_clean_to].wb.status;
	if (!(status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))) {
		PMD_TX_FREE_LOG(DEBUG,
				"TX descriptor %4u is not done"
				"(port=%d queue=%d)",
				desc_to_clean_to,
				txq->port_id, txq->queue_id);
		/* Failed to clean any descriptors, better luck next time */
		return -(1);
	}

Result:
	We do nothing.

Expect:
	Free 10 packets.

Thoughts:
	If we check the status from txr[id].wb.status to txr[id+31].wb.status one by one, we find that txr[id].wb.status, txr[id+1].wb.status, txr[id+2].wb.status ... txr[id+9].wb.status all have IXGBE_TXD_STAT_DD set, so we actually do have the ability to free 10 packets.



Case 2:
Status:
	Suppose that before calling tx_done_cleanup(..., uint32_t cnt), the txq status is as follows.

	one mbuf per packet
	txq->tx_rs_thresh = 32;
	cleaning starts from index id
	txr[id+31].wb.status & IXGBE_TXD_STAT_DD != 0.   // means we could free 32 packets

Process Flow:
	tx_done_cleanup(..., 10) is invoked to free 10 packets.
	When tx_done_cleanup() invokes ixgbe_tx_free_bufs() to free buffers, it frees 32 packets.

Result:
	Free 32 packets.

Expect:
	Free 10 packets.

Thoughts:
	If we check the status from txr[id].wb.status to txr[id+31].wb.status one by one, we find that txr[id].wb.status, txr[id+1].wb.status, txr[id+2].wb.status ... txr[id+10].wb.status all have IXGBE_TXD_STAT_DD set, so we have the ability to free only the 10 packets requested. A rough sketch of this per-descriptor check follows.
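
To illustrate, the per-descriptor check we have in mind would look roughly
like this (illustrative only: a hypothetical helper, assuming one mbuf per
packet and that the DD bit is set on every completed descriptor):

static uint32_t
scan_and_free(struct ixgbe_tx_queue *txq, uint16_t id, uint32_t cnt)
{
	volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring;
	struct ixgbe_tx_entry *swr = txq->sw_ring;
	uint32_t n;

	for (n = 0; n < cnt; n++) {
		uint16_t idx = (uint16_t)((id + n) % txq->nb_tx_desc);

		/* stop at the first descriptor HW has not completed */
		if (!(txr[idx].wb.status &
				rte_cpu_to_le_32(IXGBE_TXD_STAT_DD)))
			break;
		if (swr[idx].mbuf != NULL) {
			rte_pktmbuf_free_seg(swr[idx].mbuf);
			swr[idx].mbuf = NULL;
		}
	}
	/* Case 1 above would free 10 packets; Case 2 would stop at cnt==10. */
	return n;
}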




Second, we would end up with a lot of code that is the same as what is already in ixgbe_xmit_cleanup().

We analyzed ixgbe_tx_free_bufs() and ixgbe_xmit_cleanup() and broke their behavior into steps.
What the code actually does is the following:
1. Determine the start position of the free action.
2. Check whether the status of the end position is DD.
3. Free (set status=0 in xmit_cleanup(), while calling rte_mempool_put_bulk() in tx_free_bufs()).
4. Update the position variables (last_desc_cleaned in xmit_cleanup(), tx_next_dd in tx_free_bufs(), and nb_tx_free in both).

If we reuse ixgbe_xmit_cleanup(), we need to get the number of mbufs before calling ixgbe_xmit_cleanup().
We also need to implement the following actions:
1. Determine the starting position.
2. Find the last mbuf of each packet and confirm whether its status is DD.
3. Free (not done by tx_free_bufs()).
4. Call xmit_cleanup() in a loop.

By comparison, we can see that 3 out of 4 steps of ixgbe_xmit_cleanup() have already been performed before it is called,
which means ixgbe_xmit_cleanup() is not highly reusable here; the duplicated code is considerable.
 
 
The following is the main code of ixgbe_xmit_cleanup(), grouped by the four actions listed above.
 
 1.
	last_desc_cleaned = txq->last_desc_cleaned;
 
 2.
	/* Determine the last descriptor needing to be cleaned */
	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
	if (desc_to_clean_to >= nb_tx_desc)
		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);

	/* Check to make sure the last descriptor to clean is done */
	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
	status = txr[desc_to_clean_to].wb.status;
	if (!(status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))) {
		PMD_TX_FREE_LOG(DEBUG,
				"TX descriptor %4u is not done"
				"(port=%d queue=%d)",
				desc_to_clean_to,
				txq->port_id, txq->queue_id);
		return -(1);
	}
3.
	/* Figure out how many descriptors will be cleaned */
	if (last_desc_cleaned > desc_to_clean_to)
		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
							desc_to_clean_to);
	else
		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
						last_desc_cleaned);
	txr[desc_to_clean_to].wb.status = 0;
4.
	/* Update the txq to reflect the last descriptor that was cleaned */
	txq->last_desc_cleaned = desc_to_clean_to;
	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
 


> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, January 7, 2020 10:10 PM
> To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
> 
> 
> Hi Chenxu,
> 
> > > > > > > > + * tx_tail is the last sent packet on the sw_ring. Goto
> > > > > > > > + the end
> > > > > > > > + * of that packet (the last segment in the packet chain)
> > > > > > > > + and
> > > > > > > > + * then the next segment will be the start of the oldest
> > > > > > > > + segment
> > > > > > > > + * in the sw_ring.
> > > > > > >
> > > > > > > Not sure I understand the sentence above.
> > > > > > > tx_tail is the value of TDT HW register (most recently armed by SW
> TD).
> > > > > > > last_id  is the index of last descriptor for multi-seg packet.
> > > > > > > next_id is just the index of next descriptor in HW TD ring.
> > > > > > > How do you conclude that it will be the ' oldest segment in the
> sw_ring'?
> > > > > > >
> > > > > >
> > > > > > The tx_tail is the last sent packet on the sw_ring. While the
> > > > > > xmit_cleanup or Tx_free_bufs will be call when the nb_tx_free
> > > > > > <
> > > > > tx_free_thresh .
> > > > > > So the sw_ring[tx_tail].next_id must be the begin of mbufs
> > > > > > which are not used or  Already freed . then begin the loop
> > > > > > until the mbuf is used and
> > > > > begin to free them.
> > > > > >
> > > > > >
> > > > > >
> > > > > > > Another question why do you need to write your own functions?
> > > > > > > Why can't you reuse existing ixgbe_xmit_cleanup() for
> > > > > > > full(offload) path and
> > > > > > > ixgbe_tx_free_bufs() for simple path?
> > > > > > > Yes,  ixgbe_xmit_cleanup() doesn't free mbufs, but at least
> > > > > > > it could be used to determine finished TX descriptors.
> > > > > > > Based on that you can you can free appropriate sw_ring[] entries.
> > > > > > >
> > > > > >
> > > > > > The reason why I don't reuse existing function is that they
> > > > > > all free several mbufs While the free_cnt of the API
> > > > > > rte_eth_tx_done_cleanup() is the
> > > > > number of packets.
> > > > > > It also need to be done that check which mbuffs are from the same
> packet.
> > > > >
> > > > > At first, I don't see anything bad if tx_done_cleanup() will
> > > > > free only some segments from the packet. As long as it is safe -
> > > > > there is no
> > > problem with that.
> > > > > I think rte_eth_tx_done_cleanup() operates on mbuf, not packet
> quantities.
> > > > > But in our case I think it doesn't matter, as
> > > > > ixgbe_xmit_cleanup() mark TXDs as free only when HW is done with all
> TXDs for that packet.
> > > > > As long as there is a way to reuse existing code and avoid
> > > > > duplication (without introducing any degradation) - we should use it.
> > > > > And I think there is a very good opportunity here to reuse
> > > > > existing
> > > > > ixgbe_xmit_cleanup() for tx_done_cleanup() implementation.
> > > > > Moreover because your code doesn’t follow
> > > > > ixgbe_xmit_pkts()/ixgbe_xmit_cleanup()
> > > > > logic and infrastructure, it introduces unnecessary scans over
> > > > > TXD ring, and in some cases doesn't work as expected:
> > > > >
> > > > > +while (1) {
> > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > +
> > > > > +if (sw_ring[tx_last].mbuf) {
> > > > > +if (txr[tx_last].wb.status &
> > > > > +IXGBE_TXD_STAT_DD) {
> > > > > ...
> > > > > +} else {
> > > > > +/*
> > > > > + * mbuf still in use, nothing left to
> > > > > + * free.
> > > > > + */
> > > > > +break;
> > > > >
> > > > > It is not correct to expect that IXGBE_TXD_STAT_DD will be set
> > > > > on last TXD for
> > > > > *every* packet.
> > > > > We set IXGBE_TXD_CMD_RS bit only on threshold packet last descriptor.
> > > > > Plus ixgbe_xmit_cleanup() can cleanup TXD wb.status.
> > > > >
> > > > > So I strongly recommend to reuse ixgbe_xmit_cleanup() here.
> > > > > It would be much less error prone and will help to avoid code duplication.
> > > > >
> > > > > Konstantin
> > > > >
> > > >
> > > > At first.
> > > > The function ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) will
> > > > cleanup
> > > TXD wb.status.
> > > > the number of status cleanuped is always txq->tx_rs_thresh.
> > >
> > > Yes, and what's wrong with it?
> > >
> > > >
> > > > The  API rte_eth_tx_done_cleanup() in rte_eth_dev.h show that
> > > > @param free_cnt
> > > >  *   Maximum number of packets to free. Use 0 to indicate all possible
> packets
> > > >  *   should be freed. Note that a packet may be using multiple mbufs.
> > >
> > > I don't think it is a good approach, better would be to report
> > > number of freed mbufs, but ok, as it is a public API, we probably need to
> keep it as it is.
> > >
> > > > a number must be set while ixgbe_xmit_cleanup and
> > > > ixgbe_tx_free_bufs only
> > > have one parameter txq.
> > >
> > > Yes, ixgbe_xmit_cleanup() cleans up at least txq->tx_rs_thresh TXDs.
> > > So if user requested more packets to be freed we can call
> > > ixgbe_xmit_cleanup() in a loop.
> > >
> >
> > That is a great idea and I discuss with my workmate about it today.
> > there is also some Question that we don’t confirm.
> > Actually it can call ixgbe_xmit_cleanup() in a loop if user requested
> > more packets, How to deal with the MOD. For example:
> > The default tx_rs_thresh is 32.if the count of mbufs we need free is 50.
> > We can call ixgbe_xmit_cleanup() one time to free 32 mbufs.
> > Then how about other 18 mbufs.
> > 1.If we do nothing, it looks not good.
> > 2.if we call ixgbe_xmit_cleanup() successfully, we free 14 mbufs more.
> > 3.if we call ixgbe_xmit_cleanup() fail, the status of No.32 mbufs is not DD
> >   While No .18 is DD. So we do not free 18 mbufs what we can and should free.
> >
> > We have try some plans about it, likes add parameter for
> > ixgbe_xmit_cleanup(), change  Tx_rs_thresh or copy the code or
> > ixgbe_xmit_cleanup() as a new function with more Parameter. But all of them
> seem not perfect.
> >
> > So  can you give some comment about it? It seems not easy as we think by
> reuse function.
> 
> My thought about it:
> for situations when cnt % rs_thresh != 0 we'll still call ixgbe_xmit_cleanup() cnt /
> rs_thresh + 1 times.
> But then, we can free just cnt % rs_thresh mbufs, while keeping rest of them
> intact.
> 
> Let say at some moment we have txq->nb_tx_free==0, and user called
> tx_done_cleanup(txq, ..., cnt==50) So we call ixgbe_xmit_cleanup(txq) first time.
> Suppose it frees 32 TXDs, then we walk through corresponding sw_ring[] entries
> and let say free 32 packets (one mbuf per packet).
> Then we call ixgbe_xmit_cleanup(txq)  second time.
> Suppose it will free another 32 TXDs, we again walk thour sw_ring[], but free
> only first 18 mbufs and return.
> Suppose we call tx_done_cleanup(txq, cnt=50) immediately again.
> Now txq->nb_tx_free==64, so we can start to scan sw_entries[] from tx_tail
> straightway. We'll skip first 50 entries as they are already empty, then free
> remaining 14 mbufs, then will call ixgbe_xmit_cleanup(txq) again, and if it would
> be successful, will scan and free another 32 sw_ring[] entries.
> Then again will call ixgbe_xmit_cleanup(txq), but will free
> only first 8 available sw_ring[].mbuf.
> Probably a bit easier with the code:
> 
> tx_done_cleanup(..., uint32_t cnt)
> {
>        swr = txq->sw_ring;
>        txr     = txq->tx_ring;
>        id   = txq->tx_tail;
> 
>       if (txq->nb_tx_free == 0)
>          ixgbe_xmit_cleanup(txq);
> 
>       free =  txq->nb_tx_free;
>       for (n = 0; n < cnt && free != 0; ) {
> 
>           for (j = 0; j != free && n < cnt; j++) {
>              swe = &swr[id + j];
>              if (swe->mbuf != NULL) {
>                    rte_pktmbuf_free_seg(swe->mbuf);
>                    swe->mbuf = NULL;
> 
>                   /* last segment in the packet, increment packet count */
>                   n += (swe->last_id == id + j);
>              }
>           }
> 
>           if (n < cnt) {
>                ixgbe_xmit_cleanup(txq);
>                free =   txq->nb_tx_free - free;
>           }
>      }
>      return n;
> }
> 
> For the situation when there are less then rx_thresh free TXDs ((txq-
> >tx_ring[desc_to_clean_to].wb.status & IXGBE_TXD_STAT_DD) == 0) we do
> nothing - in that case we consider there are no more mbufs to free.
> 
> >
> >
> > > > And what should do is not only free buffers and status but also
> > > > check which bufs are from  One packet and count the packet freed.
> > >
> > > ixgbe_xmit_cleanup() itself doesn't free mbufs itself.
> > > It only cleans up TXDs.
> > > So in tx_done_cleanup() after calling ixgbe_xmit_cleanup() you'll
> > > still need to go through sw_ring[] entries that correspond to free TXDs and
> call mbuf_seg_free().
> > > You can count number of full packets here.
> > >
> > > > So I think it can't be implemented that reuse function
> > > > xmit_cleanup without
> > > change it.
> > > > And create a new function with the code of xmit_cleanup will cause
> > > > many
> > > duplication.
> > >
> > > I don't think it would.
> > > I think all we need here is something like that (note it is
> > > schematic one, doesn't take into accounf that TXD ring is circular):
> > >
> > > tx_done_cleanup(..., uint32_t cnt)
> > > {
> > >       /* we have txq->nb_tx_free TXDs starting from txq->tx_tail.
> > >            Scan them first and free as many mbufs as we can.
> > >            If we need more mbufs to free call  ixgbe_xmit_cleanup()
> > >            to free more TXDs. */
> > >
> > >        swr = txq->sw_ring;
> > >        txr     = txq->tx_ring;
> > >        id   = txq->tx_tail;
> > >        free =  txq->nb_tx_free;
> > >
> > >        for (n = 0; n < cnt && free != 0; ) {
> > >
> > >           for (j = 0; j != free && n < cnt; j++) {
> > >              swe = &swr[id + j];
> > >              if (swe->mbuf != NULL) {
> > >                    rte_pktmbuf_free_seg(swe->mbuf);
> > >                    swe->mbuf = NULL;
> > >              }
> > >              n += (swe->last_id == id + j)
> > >           }
> > >
> > >           if (n < cnt) {
> > >                ixgbe_xmit_cleanup(txq);
> > >                free =   txq->nb_tx_free - free;
> > >           }
> > >      }
> > >      return n;
> > > }
> > >
> > > >
> > > > Above all , it seem not a perfect idea to reuse ixgbe_xmit_cleanup().
> > >
> > > Totally disagree, see above.
> > >
> > > >
> > > > Second.
> > > > The function in patch is copy from code in igb_rxtx.c. it already
> > > > updated in 2017, The commit id is
> > > 8d907d2b79f7a54c809f1c44970ff455fa2865e1.
> > >
> > > I realized that.
> > > But I think it as a problem, not a positive thing.
> > > While they do have some similarities, igb abd ixgbe are PMDs for
> > > different devices, and their TX code differs quite a lot. Let say
> > > igb doesn't use tx_rs_threshold, but instead set RS bit for each last TXD.
> > > So, just blindly copying tx_done_cleanup() from igb to ixgbe doesn't
> > > look like a good idea to me.
> > >
> > > > I trust the logic of code is right.
> > > > Actually it don't complete for ixgbe, i40e and ice, while it don't
> > > > change the value of last_desc_cleaned and tx_next_dd. And it's
> > > > beginning prefer last_desc_cleaned or  tx_next_dd(for offload or
> > > > simple) to
> > > tx_tail.
> > > >
> > > > So, I suggest to use the old function and fix the issue.
> > > >
> > > > > >
> > > > > >
> > > > > > > >This is the first packet that will be
> > > > > > > > + * attempted to be freed.
> > > > > > > > + */
> > > > > > > > +
> > > > > > > > +/* Get last segment in most recently added packet. */
> > > > > > > > +tx_last = sw_ring[txq->tx_tail].last_id;
> > > > > > > > +
> > > > > > > > +/* Get the next segment, which is the oldest segment in ring.
> > > > > > > > +*/ tx_first = sw_ring[tx_last].next_id;
> > > > > > > > +
> > > > > > > > +/* Set the current index to the first. */ tx_id =
> > > > > > > > +tx_first;
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Loop through each packet. For each packet, verify that
> > > > > > > > +an
> > > > > > > > + * mbuf exists and that the last segment is free. If so,
> > > > > > > > +free
> > > > > > > > + * it and move on.
> > > > > > > > + */
> > > > > > > > +while (1) {
> > > > > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > > > > +
> > > > > > > > +if (sw_ring[tx_last].mbuf) { if (!(txr[tx_last].wb.status
> > > > > > > > +&
> > > > > > > > +IXGBE_TXD_STAT_DD))
> > > > > > > > +break;
> > > > > > > > +
> > > > > > > > +/* Get the start of the next packet. */ tx_next =
> > > > > > > > +sw_ring[tx_last].next_id;
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Loop through all segments in a
> > > > > > > > + * packet.
> > > > > > > > + */
> > > > > > > > +do {
> > > > > > > > +rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > > > > > > > +sw_ring[tx_id].mbuf = NULL; sw_ring[tx_id].last_id =
> > > > > > > > +tx_id;
> > > > > > > > +
> > > > > > > > +/* Move to next segment. */ tx_id =
> > > > > > > > +sw_ring[tx_id].next_id;
> > > > > > > > +
> > > > > > > > +} while (tx_id != tx_next);
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Increment the number of packets
> > > > > > > > + * freed.
> > > > > > > > + */
> > > > > > > > +count++;
> > > > > > > > +
> > > > > > > > +if (unlikely(count == (int)free_cnt)) break; } else {
> > > > > > > > +/*
> > > > > > > > + * There are multiple reasons to be here:
> > > > > > > > + * 1) All the packets on the ring have been
> > > > > > > > + *    freed - tx_id is equal to tx_first
> > > > > > > > + *    and some packets have been freed.
> > > > > > > > + *    - Done, exit
> > > > > > > > + * 2) Interfaces has not sent a rings worth of
> > > > > > > > + *    packets yet, so the segment after tail is
> > > > > > > > + *    still empty. Or a previous call to this
> > > > > > > > + *    function freed some of the segments but
> > > > > > > > + *    not all so there is a hole in the list.
> > > > > > > > + *    Hopefully this is a rare case.
> > > > > > > > + *    - Walk the list and find the next mbuf. If
> > > > > > > > + *      there isn't one, then done.
> > > > > > > > + */
> > > > > > > > +if (likely(tx_id == tx_first && count != 0)) break;
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Walk the list and find the next mbuf, if any.
> > > > > > > > + */
> > > > > > > > +do {
> > > > > > > > +/* Move to next segment. */ tx_id =
> > > > > > > > +sw_ring[tx_id].next_id;
> > > > > > > > +
> > > > > > > > +if (sw_ring[tx_id].mbuf)
> > > > > > > > +break;
> > > > > > > > +
> > > > > > > > +} while (tx_id != tx_first);
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * Determine why previous loop bailed. If there
> > > > > > > > + * is not an mbuf, done.
> > > > > > > > + */
> > > > > > > > +if (sw_ring[tx_id].mbuf == NULL) break; } }
> > > > > > > > +
> > > > > > > > +return count;
> > > > > > > > +}
> > > > > > > > +
> > > > > > > >  static void __attribute__((cold))
> > > > > > > > ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)  { diff
> > > > > > > > --git a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > b/drivers/net/ixgbe/ixgbe_rxtx.h index
> > > > > > > > 505d344b9..2c3770af6
> > > > > > > > 100644
> > > > > > > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > @@ -285,6 +285,8 @@ int
> > > > > > > > ixgbe_rx_vec_dev_conf_condition_check(struct
> > > > > > > > rte_eth_dev *dev);  int ixgbe_rxq_vec_setup(struct
> > > > > > > > ixgbe_rx_queue *rxq);  void
> > > > > > > > ixgbe_rx_queue_release_mbufs_vec(struct
> > > > > > > > ixgbe_rx_queue *rxq);
> > > > > > > >
> > > > > > > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > > > > > > +
> > > > > > > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > > > > > > >  extern const uint32_t
> > > > > > > > ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.17.1
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-08 10:15                   ` Di, ChenxuX
@ 2020-01-08 15:12                     ` Ananyev, Konstantin
  0 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-08 15:12 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming


Hi Chenxu,
 
> Thanks for your read.
> 
> for our research, we don't think it is a good plan that reuse ixgbe_xmit_cleanup() or ixgbe_tx_free_bufs.
> following two opinion will show the reason.

I think there is one main misunderstanding between us:

TXD.DD bit setting.
You expect that for every completed packet the HW will set the DD bit.
For ixgbe that is not the case.
For ixgbe (and AFAIK for i40e) we don't set the RS bit for every packet.
Instead we set it once per bunch of packets: every tx_rs_thresh descriptors.
That's one of the optimizations the ixgbe PMD does.
So the valid assumption is to check the DD bit only on TXDs where the RS bit is set.
That's how both ixgbe_xmit_cleanup() and ixgbe_tx_free_bufs() work.
All TXDs between last_desc_cleaned and desc_to_clean_to
(or between tx_next_dd and tx_next_dd + tx_rs_thresh) are considered
by the PMD as 'still busy', even if in reality some of them might already not be.
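
Schematically, the RS-bit policy in the TX path works like the fragment
below (a simplified illustration with a hypothetical helper name, not the
exact driver code):

/*
 * Once per tx_rs_thresh descriptors, request Report Status; HW then
 * writes back DD only on those "threshold" descriptors.
 */
static inline uint32_t
txd_rs_flag(struct ixgbe_tx_queue *txq, uint16_t nb_used)
{
	uint32_t cmd = 0;

	txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used);
	if (txq->nb_tx_used >= txq->tx_rs_thresh) {
		cmd = IXGBE_TXD_CMD_RS;	/* HW will set DD on this TXD */
		txq->nb_tx_used = 0;
	}
	return cmd;
}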

Yes, it means that in some cases some mbufs will be considered
by the PMD as still in use while they really are not.
But I don't think that is a critical thing.
Keeping the TXD and related mbuf management logic within the PMD
consistent is much more important.
 
In other words, I still think reusing the existing functions
(ixgbe_xmit_cleanup() and ixgbe_tx_free_bufs()) for tx_done_cleanup()
is a better option than trying to create new ones.

BTW, as a side question, do you guys have any vehicle to test these functions?
AFAIK, right now this function is not called from any app:
find  app examples/ -type f -name '*.[h,c]' | xargs grep -l rte_eth_tx_done_cleanup
<empty>
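
A minimal snippet to exercise it from an app could look like this
(hypothetical, not taken from any existing example):

#include <stdio.h>
#include <rte_ethdev.h>

/* After transmitting some bursts, ask the PMD to reclaim up to 32
 * completed packets on port 0 / TX queue 0.
 */
static void
exercise_tx_done_cleanup(void)
{
	int ret = rte_eth_tx_done_cleanup(0 /* port */, 0 /* queue */, 32);

	if (ret < 0)
		printf("tx_done_cleanup failed: %d\n", ret);
	else
		printf("freed %d packets\n", ret);
}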

> 
> first, it doesn't solve the isituations when cnt % rs_thresh != 0 even by using your plan.
> 
> For example, the parameters of API rte_eth_tx_done_cleanup free_cnt=0, that means we want to free all possible mbufs(one mbuf per
> packet). we find it after checking that
> No.18 mbuf status is DD, and the status of mbuf after No.19 is in use.
> 
> If the plan is that call ixgbe_xmit_cleanup() cnt /rs_thresh + 1 times( 1 in this example).
> After ixgbe_xmit_cleanup()  the value of last_desc_cleaned is not same as the actual.
> the function ixgbe_xmit_pkts will call ixgbe_xmit_cleanup automatically when nb_tx_free<tx_free_thresh, mbufs before last_desc_cleaned
> (No.19~No.31 for this example)will not be freed
> 
> however, all of above is for ixgbe_xmit_pkts() and ixgbe_xmit_cleanup(). if that for ixgbe_xmit_pkts_simple() and ixgbe_tx_free_bufs(),it
> may be worse.
> because the free buff action is in the function ixgbe_tx_free_bufs, we won't  do free in the outside code. And  no mbufs will be freed.
> 
> If you do free in outside code for the MOD mbufs and update the value of tx_next_dd and nb_tx_free. there is no need to call
> ixgbe_tx_free_bufs.
> 
> here is two case about tx_done_cleanup and ixgbe_tx_free_bufs. it may be more detail
> 
> Case 1
> Status:
> 	Suppose before calling tx_done_cleanup(..., uint32_t cnt), txq status are as followings.
> 
> 	one mbuf per packet
> 	txq->tx_rs_thresh = 32;
> 	clean mbuf from index id
> 	txr[id+9].wb.status & IXGBE_TXD_STAT_DD != 0.   // means we cloud free 10 packets
> 	txr[id+31].wb.status & IXGBE_TXD_STAT_DD == 0.  // means we cloud not clean 32 packets
> 
> Process Flow:
> 	tx_done_cleanup(..., 10) be invoked to free 10 packets.
> 	Firstly ,tx_done_cleanup will invoke ixgbe_xmit_cleanup to count how many mbufs could free.
> 	But ixgbe_xmit_cleanup will return -1 (means no mbufs to free), please ref code bellow
> 
> 	status = txr[desc_to_clean_to].wb.status;
> 	if (!(status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))) {
> 		PMD_TX_FREE_LOG(DEBUG,
> 				"TX descriptor %4u is not done"
> 				"(port=%d queue=%d)",
> 				desc_to_clean_to,
> 				txq->port_id, txq->queue_id);
> 		/* Failed to clean any descriptors, better luck next time */
> 		return -(1);
> 	}
> 
> Result:
> 	We do nothing
> 
> Expect:
> 	Free 10 packets.
> 
> Thoughts:
> 	If we try to check status from txr[id].wb.status to txr[id+31].wb.status one by one, we could find txr[id].wb.status,
> txr[id+1].wb.status, txr[id+2].wb.status ……txr[id+9].wb.status, all of them status are IXGBE_TXD_STAT_DD, so actually we have the ability
> to free 10 packets.
> 
> 
> 
> Case 2:
> Status:
> 	Suppose before calling tx_done_cleanup(..., uint32_t cnt), txq status are as followings.
> 
> 	one mbuf per packet
> 	txq->tx_rs_thresh = 32;
> 	clean mbuf from index id
> 	txr[id+31].wb.status & IXGBE_TXD_STAT_DD != 0.   // means we cloud free 32 packets
> 
> Process Flow:
> 	tx_done_cleanup(..., 10) be invoked to free 10 packets.
> 	When tx_done_cleanup invoke ixgbe_tx_free_bufs free bufs, it will free 32 packets..
> 
> Result:
> 	Free 32 packets.
> 
> Expect:
> 	Free 10 packets.
> 
> Thoughts:
> 	If we try to check status from txr[id].wb.status to txr[id+31].wb.status one by one, we could find txr[id].wb.status,
> txr[id+1].wb.status, txr[id+2].wb.status ……txr[id+10].wb.status, all of them status are IXGBE_TXD_STAT_DD, so we have the ability to free
> 10 packets only.
> 
> 
> 
> 
>  second, we have a lot of codes what is same as it in function ixgbe_xmit_cleanup.
> 
> 
>  we do analysis for function ixgbe_tx_free_bufs and ixgbe_xmit_cleanup and segment their actions.
>  the actual action of codes are following points:
> 1. Determine the start position of the free action.
> 2. Check whether the status of the end position is DD
> 3. Free ( set status=0 in xmit_cleanup() while call rte_mempool_put_bulk() in tx_free_bufs())
> 4. Update location variables( last_desc_cleaned in xmit_cleanup() and  tx_next_dd  in tx_free_bufs() and nb_tx_free in both).
> 
> If reuse ixgbe_xmit_cleanup, we need get the number of mbufs before calling ixgbe_xmit_cleanup()
> We also need to implement the following actions:
> 1. Determine the starting position
> 2. Find the last mbuf of each PKT and confirm whether its status is DD
> 3. Free (don't do for tx_free_bufs())
> 4. call xmit_cleanup function in loop.
> 
> By comparison, it is possible to see that 3/4 functions of ixgbe_xmit_cleanup are already being called before,
> it means that ixgbe_xmit_cleanup is not highly reusable, the repeat codes is so many.
> 
> 
> following code is the main code in ixgbe_xmit_cleanup() and the action code.
> 
>  1.
> 	last_desc_cleaned = txq->last_desc_cleaned;
> 
>  2.
> 	/* Determine the last descriptor needing to be cleaned */
> 	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
> 	if (desc_to_clean_to >= nb_tx_desc)
> 		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
> 
> 	/* Check to make sure the last descriptor to clean is done */
> 	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> 	status = txr[desc_to_clean_to].wb.status;
> 	if (!(status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))) {
> 		PMD_TX_FREE_LOG(DEBUG,
> 		return -(1);
> 	}
> 3.
> 	/* Figure out how many descriptors will be cleaned */
> 	if (last_desc_cleaned > desc_to_clean_to)
> 		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
> 							desc_to_clean_to);
> 	else
> 		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
> 						last_desc_cleaned);
> 	txr[desc_to_clean_to].wb.status = 0;
> 4.
> 	/* Update the txq to reflect the last descriptor that was cleaned */
> 	txq->last_desc_cleaned = desc_to_clean_to;
> 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
> 
> 
> 
> 
> 
> 
> 
> 
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, January 7, 2020 10:10 PM
> > To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v6 3/4] net/ixgbe: cleanup Tx buffers
> >
> >
> > Hi Chenxu,
> >
> > > > > > > > > + * tx_tail is the last sent packet on the sw_ring. Goto
> > > > > > > > > + the end
> > > > > > > > > + * of that packet (the last segment in the packet chain)
> > > > > > > > > + and
> > > > > > > > > + * then the next segment will be the start of the oldest
> > > > > > > > > + segment
> > > > > > > > > + * in the sw_ring.
> > > > > > > >
> > > > > > > > Not sure I understand the sentence above.
> > > > > > > > tx_tail is the value of TDT HW register (most recently armed by SW
> > TD).
> > > > > > > > last_id  is the index of last descriptor for multi-seg packet.
> > > > > > > > next_id is just the index of next descriptor in HW TD ring.
> > > > > > > > How do you conclude that it will be the ' oldest segment in the
> > sw_ring'?
> > > > > > > >
> > > > > > >
> > > > > > > The tx_tail is the last sent packet on the sw_ring. While the
> > > > > > > xmit_cleanup or Tx_free_bufs will be call when the nb_tx_free
> > > > > > > <
> > > > > > tx_free_thresh .
> > > > > > > So the sw_ring[tx_tail].next_id must be the begin of mbufs
> > > > > > > which are not used or  Already freed . then begin the loop
> > > > > > > until the mbuf is used and
> > > > > > begin to free them.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > Another question why do you need to write your own functions?
> > > > > > > > Why can't you reuse existing ixgbe_xmit_cleanup() for
> > > > > > > > full(offload) path and
> > > > > > > > ixgbe_tx_free_bufs() for simple path?
> > > > > > > > Yes,  ixgbe_xmit_cleanup() doesn't free mbufs, but at least
> > > > > > > > it could be used to determine finished TX descriptors.
> > > > > > > > Based on that you can you can free appropriate sw_ring[] entries.
> > > > > > > >
> > > > > > >
> > > > > > > The reason why I don't reuse existing function is that they
> > > > > > > all free several mbufs While the free_cnt of the API
> > > > > > > rte_eth_tx_done_cleanup() is the
> > > > > > number of packets.
> > > > > > > It also need to be done that check which mbuffs are from the same
> > packet.
> > > > > >
> > > > > > At first, I don't see anything bad if tx_done_cleanup() will
> > > > > > free only some segments from the packet. As long as it is safe -
> > > > > > there is no
> > > > problem with that.
> > > > > > I think rte_eth_tx_done_cleanup() operates on mbuf, not packet
> > quantities.
> > > > > > But in our case I think it doesn't matter, as
> > > > > > ixgbe_xmit_cleanup() mark TXDs as free only when HW is done with all
> > TXDs for that packet.
> > > > > > As long as there is a way to reuse existing code and avoid
> > > > > > duplication (without introducing any degradation) - we should use it.
> > > > > > And I think there is a very good opportunity here to reuse
> > > > > > existing
> > > > > > ixgbe_xmit_cleanup() for tx_done_cleanup() implementation.
> > > > > > Moreover because your code doesn’t follow
> > > > > > ixgbe_xmit_pkts()/ixgbe_xmit_cleanup()
> > > > > > logic and infrastructure, it introduces unnecessary scans over
> > > > > > TXD ring, and in some cases doesn't work as expected:
> > > > > >
> > > > > > +while (1) {
> > > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > > +
> > > > > > +if (sw_ring[tx_last].mbuf) {
> > > > > > +if (txr[tx_last].wb.status &
> > > > > > +IXGBE_TXD_STAT_DD) {
> > > > > > ...
> > > > > > +} else {
> > > > > > +/*
> > > > > > + * mbuf still in use, nothing left to
> > > > > > + * free.
> > > > > > + */
> > > > > > +break;
> > > > > >
> > > > > > It is not correct to expect that IXGBE_TXD_STAT_DD will be set
> > > > > > on last TXD for
> > > > > > *every* packet.
> > > > > > We set IXGBE_TXD_CMD_RS bit only on threshold packet last descriptor.
> > > > > > Plus ixgbe_xmit_cleanup() can cleanup TXD wb.status.
> > > > > >
> > > > > > So I strongly recommend to reuse ixgbe_xmit_cleanup() here.
> > > > > > It would be much less error prone and will help to avoid code duplication.
> > > > > >
> > > > > > Konstantin
> > > > > >
> > > > >
> > > > > At first.
> > > > > The function ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) will
> > > > > cleanup
> > > > TXD wb.status.
> > > > > the number of status cleanuped is always txq->tx_rs_thresh.
> > > >
> > > > Yes, and what's wrong with it?
> > > >
> > > > >
> > > > > The  API rte_eth_tx_done_cleanup() in rte_eth_dev.h show that
> > > > > @param free_cnt
> > > > >  *   Maximum number of packets to free. Use 0 to indicate all possible
> > packets
> > > > >  *   should be freed. Note that a packet may be using multiple mbufs.
> > > >
> > > > I don't think it is a good approach, better would be to report
> > > > number of freed mbufs, but ok, as it is a public API, we probably need to
> > keep it as it is.
> > > >
> > > > > a number must be set while ixgbe_xmit_cleanup and
> > > > > ixgbe_tx_free_bufs only
> > > > have one parameter txq.
> > > >
> > > > Yes, ixgbe_xmit_cleanup() cleans up at least txq->tx_rs_thresh TXDs.
> > > > So if user requested more packets to be freed we can call
> > > > ixgbe_xmit_cleanup() in a loop.
> > > >
> > >
> > > That is a great idea and I discuss with my workmate about it today.
> > > there is also some Question that we don’t confirm.
> > > Actually it can call ixgbe_xmit_cleanup() in a loop if user requested
> > > more packets, How to deal with the MOD. For example:
> > > The default tx_rs_thresh is 32.if the count of mbufs we need free is 50.
> > > We can call ixgbe_xmit_cleanup() one time to free 32 mbufs.
> > > Then how about other 18 mbufs.
> > > 1.If we do nothing, it looks not good.
> > > 2.if we call ixgbe_xmit_cleanup() successfully, we free 14 mbufs more.
> > > 3.if we call ixgbe_xmit_cleanup() fail, the status of No.32 mbufs is not DD
> > >   While No .18 is DD. So we do not free 18 mbufs what we can and should free.
> > >
> > > We have try some plans about it, likes add parameter for
> > > ixgbe_xmit_cleanup(), change  Tx_rs_thresh or copy the code or
> > > ixgbe_xmit_cleanup() as a new function with more Parameter. But all of them
> > seem not perfect.
> > >
> > > So  can you give some comment about it? It seems not easy as we think by
> > reuse function.
> >
> > My thought about it:
> > for situations when cnt % rs_thresh != 0 we'll still call ixgbe_xmit_cleanup() cnt /
> > rs_thresh + 1 times.
> > But then, we can free just cnt % rs_thresh mbufs, while keeping rest of them
> > intact.
> >
> > Let say at some moment we have txq->nb_tx_free==0, and user called
> > tx_done_cleanup(txq, ..., cnt==50) So we call ixgbe_xmit_cleanup(txq) first time.
> > Suppose it frees 32 TXDs, then we walk through corresponding sw_ring[] entries
> > and let say free 32 packets (one mbuf per packet).
> > Then we call ixgbe_xmit_cleanup(txq)  second time.
> > Suppose it will free another 32 TXDs, we again walk thour sw_ring[], but free
> > only first 18 mbufs and return.
> > Suppose we call tx_done_cleanup(txq, cnt=50) immediately again.
> > Now txq->nb_tx_free==64, so we can start to scan sw_entries[] from tx_tail
> > straightway. We'll skip first 50 entries as they are already empty, then free
> > remaining 14 mbufs, then will call ixgbe_xmit_cleanup(txq) again, and if it would
> > be successful, will scan and free another 32 sw_ring[] entries.
> > Then again will call ixgbe_xmit_cleanup(txq), but will free
> > only first 8 available sw_ring[].mbuf.
> > Probably a bit easier with the code:
> >
> > tx_done_cleanup(..., uint32_t cnt)
> > {
> >        swr = txq->sw_ring;
> >        txr     = txq->tx_ring;
> >        id   = txq->tx_tail;
> >
> >       if (txq->nb_tx_free == 0)
> >          ixgbe_xmit_cleanup(txq);
> >
> >       free =  txq->nb_tx_free;
> >       for (n = 0; n < cnt && free != 0; ) {
> >
> >           for (j = 0; j != free && n < cnt; j++) {
> >              swe = &swr[id + j];
> >              if (swe->mbuf != NULL) {
> >                    rte_pktmbuf_free_seg(swe->mbuf);
> >                    swe->mbuf = NULL;
> >
> >                   /* last segment in the packet, increment packet count */
> >                   n += (swe->last_id == id + j);
> >              }
> >           }
> >
> >           if (n < cnt) {
> >                ixgbe_xmit_cleanup(txq);
> >                free =   txq->nb_tx_free - free;
> >           }
> >      }
> >      return n;
> > }
> >
> > For the situation when there are less then rx_thresh free TXDs ((txq-
> > >tx_ring[desc_to_clean_to].wb.status & IXGBE_TXD_STAT_DD) == 0) we do
> > nothing - in that case we consider there are no more mbufs to free.
> >
> > >
> > >
> > > > > And what should do is not only free buffers and status but also
> > > > > check which bufs are from  One packet and count the packet freed.
> > > >
> > > > ixgbe_xmit_cleanup() itself doesn't free mbufs itself.
> > > > It only cleans up TXDs.
> > > > So in tx_done_cleanup() after calling ixgbe_xmit_cleanup() you'll
> > > > still need to go through sw_ring[] entries that correspond to free TXDs and
> > call mbuf_seg_free().
> > > > You can count number of full packets here.
> > > >
> > > > > So I think it can't be implemented that reuse function
> > > > > xmit_cleanup without
> > > > change it.
> > > > > And create a new function with the code of xmit_cleanup will cause
> > > > > many
> > > > duplication.
> > > >
> > > > I don't think it would.
> > > > I think all we need here is something like that (note it is
> > > > schematic one, doesn't take into accounf that TXD ring is circular):
> > > >
> > > > tx_done_cleanup(..., uint32_t cnt)
> > > > {
> > > >       /* we have txq->nb_tx_free TXDs starting from txq->tx_tail.
> > > >            Scan them first and free as many mbufs as we can.
> > > >            If we need more mbufs to free call  ixgbe_xmit_cleanup()
> > > >            to free more TXDs. */
> > > >
> > > >        swr = txq->sw_ring;
> > > >        txr     = txq->tx_ring;
> > > >        id   = txq->tx_tail;
> > > >        free =  txq->nb_tx_free;
> > > >
> > > >        for (n = 0; n < cnt && free != 0; ) {
> > > >
> > > >           for (j = 0; j != free && n < cnt; j++) {
> > > >              swe = &swr[id + j];
> > > >              if (swe->mbuf != NULL) {
> > > >                    rte_pktmbuf_free_seg(swe->mbuf);
> > > >                    swe->mbuf = NULL;
> > > >              }
> > > >              n += (swe->last_id == id + j)
> > > >           }
> > > >
> > > >           if (n < cnt) {
> > > >                ixgbe_xmit_cleanup(txq);
> > > >                free =   txq->nb_tx_free - free;
> > > >           }
> > > >      }
> > > >      return n;
> > > > }
> > > >
> > > > >
> > > > > Above all , it seem not a perfect idea to reuse ixgbe_xmit_cleanup().
> > > >
> > > > Totally disagree, see above.
> > > >
> > > > >
> > > > > Second.
> > > > > The function in patch is copy from code in igb_rxtx.c. it already
> > > > > updated in 2017, The commit id is
> > > > 8d907d2b79f7a54c809f1c44970ff455fa2865e1.
> > > >
> > > > I realized that.
> > > > But I think it as a problem, not a positive thing.
> > > > While they do have some similarities, igb abd ixgbe are PMDs for
> > > > different devices, and their TX code differs quite a lot. Let say
> > > > igb doesn't use tx_rs_threshold, but instead set RS bit for each last TXD.
> > > > So, just blindly copying tx_done_cleanup() from igb to ixgbe doesn't
> > > > look like a good idea to me.
> > > >
> > > > > I trust the logic of code is right.
> > > > > Actually it don't complete for ixgbe, i40e and ice, while it don't
> > > > > change the value of last_desc_cleaned and tx_next_dd. And it's
> > > > > beginning prefer last_desc_cleaned or  tx_next_dd(for offload or
> > > > > simple) to
> > > > tx_tail.
> > > > >
> > > > > So, I suggest to use the old function and fix the issue.
> > > > >
> > > > > > >
> > > > > > >
> > > > > > > > >This is the first packet that will be
> > > > > > > > > + * attempted to be freed.
> > > > > > > > > + */
> > > > > > > > > +
> > > > > > > > > +/* Get last segment in most recently added packet. */
> > > > > > > > > +tx_last = sw_ring[txq->tx_tail].last_id;
> > > > > > > > > +
> > > > > > > > > +/* Get the next segment, which is the oldest segment in ring.
> > > > > > > > > +*/ tx_first = sw_ring[tx_last].next_id;
> > > > > > > > > +
> > > > > > > > > +/* Set the current index to the first. */ tx_id =
> > > > > > > > > +tx_first;
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Loop through each packet. For each packet, verify that
> > > > > > > > > +an
> > > > > > > > > + * mbuf exists and that the last segment is free. If so,
> > > > > > > > > +free
> > > > > > > > > + * it and move on.
> > > > > > > > > + */
> > > > > > > > > +while (1) {
> > > > > > > > > +tx_last = sw_ring[tx_id].last_id;
> > > > > > > > > +
> > > > > > > > > +if (sw_ring[tx_last].mbuf) { if (!(txr[tx_last].wb.status
> > > > > > > > > +&
> > > > > > > > > +IXGBE_TXD_STAT_DD))
> > > > > > > > > +break;
> > > > > > > > > +
> > > > > > > > > +/* Get the start of the next packet. */ tx_next =
> > > > > > > > > +sw_ring[tx_last].next_id;
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Loop through all segments in a
> > > > > > > > > + * packet.
> > > > > > > > > + */
> > > > > > > > > +do {
> > > > > > > > > +rte_pktmbuf_free_seg(sw_ring[tx_id].mbuf);
> > > > > > > > > +sw_ring[tx_id].mbuf = NULL; sw_ring[tx_id].last_id =
> > > > > > > > > +tx_id;
> > > > > > > > > +
> > > > > > > > > +/* Move to next segment. */ tx_id =
> > > > > > > > > +sw_ring[tx_id].next_id;
> > > > > > > > > +
> > > > > > > > > +} while (tx_id != tx_next);
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Increment the number of packets
> > > > > > > > > + * freed.
> > > > > > > > > + */
> > > > > > > > > +count++;
> > > > > > > > > +
> > > > > > > > > +if (unlikely(count == (int)free_cnt)) break; } else {
> > > > > > > > > +/*
> > > > > > > > > + * There are multiple reasons to be here:
> > > > > > > > > + * 1) All the packets on the ring have been
> > > > > > > > > + *    freed - tx_id is equal to tx_first
> > > > > > > > > + *    and some packets have been freed.
> > > > > > > > > + *    - Done, exit
> > > > > > > > > + * 2) Interfaces has not sent a rings worth of
> > > > > > > > > + *    packets yet, so the segment after tail is
> > > > > > > > > + *    still empty. Or a previous call to this
> > > > > > > > > + *    function freed some of the segments but
> > > > > > > > > + *    not all so there is a hole in the list.
> > > > > > > > > + *    Hopefully this is a rare case.
> > > > > > > > > + *    - Walk the list and find the next mbuf. If
> > > > > > > > > + *      there isn't one, then done.
> > > > > > > > > + */
> > > > > > > > > +if (likely(tx_id == tx_first && count != 0)) break;
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Walk the list and find the next mbuf, if any.
> > > > > > > > > + */
> > > > > > > > > +do {
> > > > > > > > > +/* Move to next segment. */ tx_id =
> > > > > > > > > +sw_ring[tx_id].next_id;
> > > > > > > > > +
> > > > > > > > > +if (sw_ring[tx_id].mbuf)
> > > > > > > > > +break;
> > > > > > > > > +
> > > > > > > > > +} while (tx_id != tx_first);
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * Determine why previous loop bailed. If there
> > > > > > > > > + * is not an mbuf, done.
> > > > > > > > > + */
> > > > > > > > > +if (sw_ring[tx_id].mbuf == NULL) break; } }
> > > > > > > > > +
> > > > > > > > > +return count;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > >  static void __attribute__((cold))
> > > > > > > > > ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)  { diff
> > > > > > > > > --git a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > > b/drivers/net/ixgbe/ixgbe_rxtx.h index
> > > > > > > > > 505d344b9..2c3770af6
> > > > > > > > > 100644
> > > > > > > > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > > > > > > > @@ -285,6 +285,8 @@ int
> > > > > > > > > ixgbe_rx_vec_dev_conf_condition_check(struct
> > > > > > > > > rte_eth_dev *dev);  int ixgbe_rxq_vec_setup(struct
> > > > > > > > > ixgbe_rx_queue *rxq);  void
> > > > > > > > > ixgbe_rx_queue_release_mbufs_vec(struct
> > > > > > > > > ixgbe_rx_queue *rxq);
> > > > > > > > >
> > > > > > > > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > > > > > > > +
> > > > > > > > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > > > > > > > >  extern const uint32_t
> > > > > > > > > ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v7 0/4] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (7 preceding siblings ...)
  2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
@ 2020-01-09 10:38 ` Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 1/4] net/i40e: " Chenxu Di
                     ` (3 more replies)
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
                   ` (2 subsequent siblings)
  11 siblings, 4 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-09 10:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the drivers including i40e, ice, ixgbe
and igb vf for the API rte_eth_tx_done_cleanup to force
free consumed buffers on Tx ring.
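
Illustration only (not part of these patches): once the callback is
wired up, an application reclaims sent mbufs through the generic
ethdev API, e.g.:

	/* free up to 32 consumed mbufs on port 0, Tx queue 0;
	 * returns the number of packets freed, or a negative
	 * errno such as -ENOTSUP on failure
	 */
	int nb = rte_eth_tx_done_cleanup(0 /* port */, 0 /* queue */, 32);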

---
v7:
changed the code design to reuse existing functions.

Chenxu Di (4):
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev.c    |   3 +
 drivers/net/i40e/i40e_ethdev_vf.c |   3 +
 drivers/net/i40e/i40e_rxtx.c      | 151 +++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   8 ++
 drivers/net/ice/ice_ethdev.c      |   3 +
 drivers/net/ice/ice_rxtx.c        | 155 +++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |  11 +++
 drivers/net/ixgbe/ixgbe_ethdev.c  |   4 +
 drivers/net/ixgbe/ixgbe_rxtx.c    | 156 +++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_rxtx.h    |  10 ++
 11 files changed, 504 insertions(+), 1 deletion(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v7 1/4] net/i40e: cleanup Tx buffers
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
@ 2020-01-09 10:38   ` Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 2/4] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-09 10:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |   3 +
 drivers/net/i40e/i40e_ethdev_vf.c |   3 +
 drivers/net/i40e/i40e_rxtx.c      | 151 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   8 ++
 4 files changed, 165 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..e0b071891 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
@@ -1358,6 +1359,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	dev->tx_pkt_burst = i40e_xmit_pkts;
 	dev->tx_pkt_prepare = i40e_prep_pkts;
 
+	i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_scalar);
+
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work. Only check we don't need a different
 	 * RX function */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..3dcc9434c 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
@@ -1473,6 +1474,8 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->rx_pkt_burst = &i40e_recv_pkts;
 	eth_dev->tx_pkt_burst = &i40e_xmit_pkts;
 
+	i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_scalar);
+
 	/*
 	 * For secondary processes, we don't initialise any further as primary
 	 * has already done this work.
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..dfbca06b6 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,154 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+static i40e_tx_done_cleanup_t i40e_tx_done_cleanup_op;
+
+int
+i40e_tx_done_cleanup_scalar(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct i40e_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0)
+		if (i40e_xmit_cleanup(txq))
+			return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_last || txq->tx_rs_thresh
+			> txq->nb_tx_desc - txq->nb_tx_free)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (i40e_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u Packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint16_t i;
+	uint16_t tx_first;
+	uint16_t tx_id;
+	uint32_t pkt_cnt;
+	struct i40e_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from tx_first */
+	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
+	tx_id  = tx_first;
+
+	/* When free_cnt is 0, assume one mbuf per packet and
+	 * try to free as many packets as possible
+	 */
+	if (free_cnt == 0)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count freeable packets */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		if (!i40e_tx_free_bufs(txq))
+			break;
+
+		for (i = 0; i != txq->tx_rs_thresh &&
+			tx_id != tx_first; i++) {
+			/* last segment in the packet,
+			 * increment packet count
+			 */
+			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_first)
+			break;
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+int
+i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	i40e_tx_done_cleanup_t func = i40e_get_tx_done_cleanup_func();
+
+	if (!func)
+		return -ENOTSUP;
+
+	return func(txq, free_cnt);
+}
+
+void
+i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_t fn)
+{
+	i40e_tx_done_cleanup_op = fn;
+}
+
+i40e_tx_done_cleanup_t
+i40e_get_tx_done_cleanup_func(void)
+{
+	return i40e_tx_done_cleanup_op;
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
@@ -3139,15 +3287,18 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
 			else
 				dev->tx_pkt_burst =
 					i40e_get_recommend_tx_vec();
+			i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_vec);
 		} else {
 			PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
 			dev->tx_pkt_burst = i40e_xmit_pkts_simple;
+			i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_simple);
 		}
 		dev->tx_pkt_prepare = NULL;
 	} else {
 		PMD_INIT_LOG(DEBUG, "Xmit tx finally be used.");
 		dev->tx_pkt_burst = i40e_xmit_pkts;
 		dev->tx_pkt_prepare = i40e_prep_pkts;
+		i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_scalar);
 	}
 }
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..ab2c0ffd0 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -173,6 +173,8 @@ union i40e_tx_offload {
 		uint64_t outer_l3_len:16; /**< outer L3 Header Length */
 	};
 };
+typedef int (*i40e_tx_done_cleanup_t)(struct i40e_tx_queue *txq,
+				uint32_t free_cnt);
 
 int i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
@@ -212,6 +214,12 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_set_tx_done_cleanup_func(i40e_tx_done_cleanup_t fn);
+i40e_tx_done_cleanup_t i40e_get_tx_done_cleanup_func(void);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_scalar(struct i40e_tx_queue *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v7 2/4] net/ice: cleanup Tx buffers
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 1/4] net/i40e: " Chenxu Di
@ 2020-01-09 10:38   ` Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 3/4] net/ixgbe: " Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 4/4] net/e1000: " Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-09 10:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |   3 +
 drivers/net/ice/ice_rxtx.c   | 155 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  11 +++
 3 files changed, 169 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..3d586fede 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
@@ -2137,6 +2138,8 @@ ice_dev_init(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = ice_xmit_pkts;
 	dev->tx_pkt_prepare = ice_prep_pkts;
 
+	ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_scalar);
+
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work.
 	 */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..db531d0fc 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,9 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
@@ -2643,6 +2646,155 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
+static ice_tx_done_cleanup_t ice_tx_done_cleanup_op;
+
+int
+ice_tx_done_cleanup_scalar(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct ice_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0)
+		if (ice_xmit_cleanup(txq))
+			return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_last || txq->tx_rs_thresh
+			> txq->nb_tx_desc - txq->nb_tx_free)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ice_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u Packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint16_t i;
+	uint16_t tx_first;
+	uint16_t tx_id;
+	uint32_t pkt_cnt;
+	struct ice_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from tx_first */
+	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
+	tx_id  = tx_first;
+
+	/* When free_cnt is 0, assume one mbuf per packet and
+	 * try to free as many packets as possible
+	 */
+	if (free_cnt == 0)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count freeable packets */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		if (!ice_tx_free_bufs(txq))
+			break;
+
+		for (i = 0; i != txq->tx_rs_thresh &&
+			tx_id != tx_first; i++) {
+			/* last segment in the packet,
+			 * increment packet count
+			 */
+			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_first)
+			break;
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	ice_tx_done_cleanup_t func = ice_get_tx_done_cleanup_func();
+
+	if (!func)
+		return -ENOTSUP;
+
+	return func(txq, free_cnt);
+}
+
+void
+ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_t fn)
+{
+	ice_tx_done_cleanup_op = fn;
+}
+
+ice_tx_done_cleanup_t
+ice_get_tx_done_cleanup_func(void)
+{
+	return ice_tx_done_cleanup_op;
+}
+
 /* Populate 4 descriptors with data from 4 mbufs */
 static inline void
 tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
@@ -3003,6 +3155,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 				    ice_xmit_pkts_vec_avx2 :
 				    ice_xmit_pkts_vec;
 		dev->tx_pkt_prepare = NULL;
+		ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_vec);
 
 		return;
 	}
@@ -3012,10 +3165,12 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts_simple;
 		dev->tx_pkt_prepare = NULL;
+		ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_simple);
 	} else {
 		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		dev->tx_pkt_burst = ice_xmit_pkts;
 		dev->tx_pkt_prepare = ice_prep_pkts;
+		ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_scalar);
 	}
 }
 
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..151bead62 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -135,6 +135,9 @@ union ice_tx_offload {
 	};
 };
 
+typedef int (*ice_tx_done_cleanup_t)(struct ice_tx_queue *txq,
+				uint32_t free_cnt);
+
 int ice_rx_queue_setup(struct rte_eth_dev *dev,
 		       uint16_t queue_idx,
 		       uint16_t nb_desc,
@@ -183,6 +186,7 @@ int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
 int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
@@ -202,4 +206,11 @@ uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
 uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
+void ice_set_tx_done_cleanup_func(ice_tx_done_cleanup_t fn);
+ice_tx_done_cleanup_t ice_get_tx_done_cleanup_func(void);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_scalar(struct ice_tx_queue *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_vec(struct ice_tx_queue *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_simple(struct ice_tx_queue *txq, uint32_t free_cnt);
+
 #endif /* _ICE_RXTX_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 1/4] net/i40e: " Chenxu Di
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 2/4] net/ice: " Chenxu Di
@ 2020-01-09 10:38   ` Chenxu Di
  2020-01-09 14:01     ` Ananyev, Konstantin
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 4/4] net/e1000: " Chenxu Di
  3 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2020-01-09 10:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   4 +
 drivers/net/ixgbe/ixgbe_rxtx.c   | 156 ++++++++++++++++++++++++++++++-
 drivers/net/ixgbe/ixgbe_rxtx.h   |  10 ++
 3 files changed, 169 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..668c36188 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
@@ -1101,6 +1103,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
 	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
 	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
+	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
@@ -1580,6 +1583,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
 	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
+	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
 
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work. Only check we don't need a different
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..122dae425 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -92,6 +92,8 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 				    uint16_t nb_pkts);
 #endif
 
+static ixgbe_tx_done_cleanup_t ixgbe_tx_done_cleanup_op;
+
 /*********************************************************************
  *
  *  TX functions
@@ -2306,6 +2308,152 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int
+ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0)
+		if (ixgbe_xmit_cleanup(txq))
+			return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_last || txq->tx_rs_thresh
+			> txq->nb_tx_desc - txq->nb_tx_free)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ixgbe_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u Packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint16_t i;
+	uint16_t tx_first;
+	uint16_t tx_id;
+	uint32_t pkt_cnt;
+	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from tx_first */
+	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
+	tx_id  = tx_first;
+
+	/* When free_cnt is 0, assume one mbuf per packet and
+	 * try to free as many packets as possible
+	 */
+	if (free_cnt == 0)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count freeable packets */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		if (!ixgbe_tx_free_bufs(txq))
+			break;
+
+		for (i = 0; i != txq->tx_rs_thresh &&
+			tx_id != tx_first; i++) {
+			/* last segment in the packet,
+			 * increment packet count
+			 */
+			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (tx_id == tx_first)
+			break;
+	}
+
+	PMD_TX_FREE_LOG(DEBUG,
+		"Free %u packets successfully "
+		"(port=%d queue=%d)",
+		pkt_cnt, txq->port_id, txq->queue_id);
+
+	return (int)pkt_cnt;
+}
+
+int
+ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	ixgbe_tx_done_cleanup_t func = ixgbe_get_tx_done_cleanup_func();
+
+	if (!func)
+		return -ENOTSUP;
+
+	return func(txq, free_cnt);
+}
+
+void
+ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn)
+{
+	ixgbe_tx_done_cleanup_op = fn;
+}
+
+ixgbe_tx_done_cleanup_t
+ixgbe_get_tx_done_cleanup_func(void)
+{
+	return ixgbe_tx_done_cleanup_op;
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
@@ -2398,9 +2546,14 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
 					ixgbe_txq_vec_setup(txq) == 0)) {
 			PMD_INIT_LOG(DEBUG, "Vector tx enabled.");
 			dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
-		} else
+			ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_vec);
+		} else {
 #endif
 		dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
+		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_simple);
+#ifdef RTE_IXGBE_INC_VECTOR
+		}
+#endif
 	} else {
 		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
 		PMD_INIT_LOG(DEBUG,
@@ -2412,6 +2565,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
 				(unsigned long)RTE_PMD_IXGBE_TX_MAX_BURST);
 		dev->tx_pkt_burst = ixgbe_xmit_pkts;
 		dev->tx_pkt_prepare = ixgbe_prep_pkts;
+		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
 	}
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..a52597aa9 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -253,6 +253,8 @@ struct ixgbe_txq_ops {
 			 IXGBE_ADVTXD_DCMD_DEXT |\
 			 IXGBE_ADVTXD_DCMD_EOP)
 
+typedef int (*ixgbe_tx_done_cleanup_t)(struct ixgbe_tx_queue *txq,
+				uint32_t free_cnt);
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
@@ -285,6 +287,14 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+void ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn);
+ixgbe_tx_done_cleanup_t ixgbe_get_tx_done_cleanup_func(void);
+
+int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v7 4/4] net/e1000: cleanup Tx buffers
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-09 10:38   ` Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-09 10:38 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb vf for the API rte_eth_tx_done_cleanup
 to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-09 14:01     ` Ananyev, Konstantin
  2020-01-10 10:08       ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-09 14:01 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX


Hi Chenxu,

Good progress with the _full_ version, but some issues still remain, I think.
More comments inline.
Konstantin

> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   4 +
>  drivers/net/ixgbe/ixgbe_rxtx.c   | 156 ++++++++++++++++++++++++++++++-
>  drivers/net/ixgbe/ixgbe_rxtx.h   |  10 ++
>  3 files changed, 169 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 2c6fd0f13..668c36188 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>  	.tm_ops_get           = ixgbe_tm_ops_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
>  };
> 
>  /*
> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.reta_query           = ixgbe_dev_rss_reta_query,
>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> +	.tx_done_cleanup      = ixgbe_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> @@ -1101,6 +1103,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>  	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
>  	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
>  	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> +	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> 
>  	/*
>  	 * For secondary processes, we don't initialise any further as primary
> @@ -1580,6 +1583,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
>  	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
>  	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
>  	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> +	ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> 
>  	/* for secondary processes, we don't initialise any further as primary
>  	 * has already done this work. Only check we don't need a different
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index fa572d184..122dae425 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -92,6 +92,8 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>  				    uint16_t nb_pkts);
>  #endif
> 
> +static ixgbe_tx_done_cleanup_t ixgbe_tx_done_cleanup_op;

You can't have just one static variable here.
There could be several ixgbe devices, and they could be configured in different ways.
I.e. tx_pkt_burst() is per device, so tx_done_cleanup() also has to be per device.
Probably the easiest way is to add a new entry for tx_done_cleanup into struct ixgbe_txq_ops,
and set it properly in ixgbe_set_tx_function().
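
Just a sketch of what I mean (names are only a suggestion):

struct ixgbe_txq_ops {
	void (*release_mbufs)(struct ixgbe_tx_queue *txq);
	void (*free_swring)(struct ixgbe_tx_queue *txq);
	void (*reset)(struct ixgbe_tx_queue *txq);
	/* new: per-queue cleanup selected in ixgbe_set_tx_function() */
	int (*txq_done_cleanup)(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
};

int
ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
	struct ixgbe_tx_queue *txq = tx_queue;

	return txq->ops->txq_done_cleanup(txq, free_cnt);
}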

> +
>  /*********************************************************************
>   *
>   *  TX functions
> @@ -2306,6 +2308,152 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>  	}
>  }
> 
> +int
> +ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt)

As a nit I would change _scalar to _full or so.

> +{
> +	uint32_t pkt_cnt;
> +	uint16_t i;
> +	uint16_t tx_last;
> +	uint16_t tx_id;
> +	uint16_t nb_tx_to_clean;
> +	uint16_t nb_tx_free_last;
> +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> +
> +	/* Start freeing mbufs from the entry after tx_tail */
> +	tx_last = txq->tx_tail;
> +	tx_id  = swr_ring[tx_last].next_id;
> +
> +	if (txq->nb_tx_free == 0)
> +		if (ixgbe_xmit_cleanup(txq))


As a nit it could be just: if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))

> +			return 0;
> +
> +	nb_tx_to_clean = txq->nb_tx_free;
> +	nb_tx_free_last = txq->nb_tx_free;
> +	if (!free_cnt)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	/* Loop through swr_ring to count the number of
> +	 * freeable mbufs and packets.
> +	 */
> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> +		for (i = 0; i < nb_tx_to_clean &&
> +			pkt_cnt < free_cnt &&
> +			tx_id != tx_last; i++) {
> +			if (swr_ring[tx_id].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> +				swr_ring[tx_id].mbuf = NULL;
> +
> +				/*
> +				 * last segment in the packet,
> +				 * increment packet count
> +				 */
> +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> +			}
> +
> +			tx_id = swr_ring[tx_id].next_id;
> +		}
> +
> +		if (tx_id == tx_last || txq->tx_rs_thresh
> +			> txq->nb_tx_desc - txq->nb_tx_free)

First condition (tx_id == tx_last) is probably redundant here.

> +			break;
> +
> +		if (pkt_cnt < free_cnt) {
> +			if (ixgbe_xmit_cleanup(txq))
> +				break;
> +
> +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> +			nb_tx_free_last = txq->nb_tx_free;
> +		}
> +	}
> +
> +	PMD_TX_FREE_LOG(DEBUG,
> +		"Free %u Packets successfully "
> +		"(port=%d queue=%d)",
> +		pkt_cnt, txq->port_id, txq->queue_id);
> +
> +	return (int)pkt_cnt;
> +}
> +
> +int
> +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> +			uint32_t free_cnt __rte_unused)
> +{
> +	return -ENOTSUP;
> +}
> +
> +int
> +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	uint16_t i;
> +	uint16_t tx_first;
> +	uint16_t tx_id;
> +	uint32_t pkt_cnt;
> +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;


Looks overcomplicated here.
TX simple (and vec) doesn't support multi-seg packets,
so one TXD means one mbuf and one packet.
And ixgbe_tx_free_bufs() always returns/frees either 0 or tx_rs_thresh mbufs/packets.
So it probably can be something like that:

int
ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
{
	int i, n, cnt;

	if (free_cnt == 0)
		free_cnt = txq->nb_tx_desc;

	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
	for (i = 0; i < cnt; i += n) {
		n = ixgbe_tx_free_bufs(txq);
		if (n == 0)
			break;
	}
	return i;
}
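
(The same simplification should apply to the i40e and ice simple paths
too, assuming their tx_free_bufs() helpers behave the same way -
freeing either 0 or tx_rs_thresh mbufs per call.)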

> +
> +	/* Start freeing mbufs from tx_first */
> +	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
> +	tx_id  = tx_first;
> +
> +	/* When free_cnt is 0, assume one mbuf per packet and
> +	 * try to free as many packets as possible
> +	 */
> +	if (free_cnt == 0)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	/* Loop through swr_ring to count freeable packets */
> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> +			break;
> +
> +		if (!ixgbe_tx_free_bufs(txq))
> +			break;
> +
> +		for (i = 0; i != txq->tx_rs_thresh &&
> +			tx_id != tx_first; i++) {
> +			/* last segment in the packet,
> +			 * increment packet count
> +			 */
> +			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
> +			tx_id = swr_ring[tx_id].next_id;
> +		}
> +
> +		if (tx_id == tx_first)
> +			break;
> +	}
> +
> +	PMD_TX_FREE_LOG(DEBUG,
> +		"Free %u packets successfully "
> +		"(port=%d queue=%d)",
> +		pkt_cnt, txq->port_id, txq->queue_id);
> +
> +	return (int)pkt_cnt;
> +}
> +
> +int
> +ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
> +{
> +	ixgbe_tx_done_cleanup_t func = ixgbe_get_tx_done_cleanup_func();
> +
> +	if (!func)
> +		return -ENOTSUP;
> +
> +	return func(txq, free_cnt);
> +}
> +
> +void
> +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn)
> +{
> +	ixgbe_tx_done_cleanup_op = fn;
> +}
> +
> +ixgbe_tx_done_cleanup_t
> +ixgbe_get_tx_done_cleanup_func(void)
> +{
> +	return ixgbe_tx_done_cleanup_op;
> +}
> +
>  static void __attribute__((cold))
>  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
>  {
> @@ -2398,9 +2546,14 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
>  					ixgbe_txq_vec_setup(txq) == 0)) {
>  			PMD_INIT_LOG(DEBUG, "Vector tx enabled.");
>  			dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
> -		} else
> +			ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_vec);
> +		} else {
>  #endif
>  		dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
> +		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_simple);
> +#ifdef RTE_IXGBE_INC_VECTOR
> +		}
> +#endif
>  	} else {
>  		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
>  		PMD_INIT_LOG(DEBUG,
> @@ -2412,6 +2565,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq)
>  				(unsigned long)RTE_PMD_IXGBE_TX_MAX_BURST);
>  		dev->tx_pkt_burst = ixgbe_xmit_pkts;
>  		dev->tx_pkt_prepare = ixgbe_prep_pkts;
> +		ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
>  	}
>  }
> 
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 505d344b9..a52597aa9 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -253,6 +253,8 @@ struct ixgbe_txq_ops {
>  			 IXGBE_ADVTXD_DCMD_DEXT |\
>  			 IXGBE_ADVTXD_DCMD_EOP)
> 
> +typedef int (*ixgbe_tx_done_cleanup_t)(struct ixgbe_tx_queue *txq,
> +				uint32_t free_cnt);
> 
>  /* Takes an ethdev and a queue and sets up the tx function to be used based on
>   * the queue parameters. Used in tx_queue_setup by primary process and then
> @@ -285,6 +287,14 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> 
> +void ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn);
> +ixgbe_tx_done_cleanup_t ixgbe_get_tx_done_cleanup_func(void);
> +
> +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +
>  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
>  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v8 0/4] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (8 preceding siblings ...)
  2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
@ 2020-01-10  9:58 ` Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 1/4] net/i40e: " Chenxu Di
                     ` (3 more replies)
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
  2020-01-14  2:22 ` [dpdk-dev] [PATCH 0/4] drivers/net: " Ye Xiaolong
  11 siblings, 4 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-10  9:58 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the drivers including i40e, ice, ixgbe
and igb vf for the API rte_eth_tx_done_cleanup to force
free consumed buffers on Tx ring.

---
v8:
removed the function pointer in favor of a different dispatch approach.
v7:
changed the code design to reuse existing functions.

Chenxu Di (4):
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c          |   1 +
 drivers/net/i40e/i40e_ethdev.c          |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c       |   1 +
 drivers/net/i40e/i40e_rxtx.c            | 109 +++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h            |   4 +
 drivers/net/ice/ice_ethdev.c            |   1 +
 drivers/net/ice/ice_rxtx.c              | 113 ++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h              |   5 ++
 drivers/net/ixgbe/ixgbe_ethdev.c        |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c          | 109 +++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h          |   8 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c |   1 +
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  |   1 +
 13 files changed, 355 insertions(+), 1 deletion(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v8 1/4] net/i40e: cleanup Tx buffers
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
@ 2020-01-10  9:58   ` Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 2/4] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-10  9:58 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 109 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   4 ++
 4 files changed, 115 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..9b3a504f3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,115 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+int
+i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct i40e_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && i40e_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (i40e_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+int
+i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = i40e_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+int
+i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+int
+i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
+	struct i40e_adapter *ad =
+		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		if (ad->tx_vec_allowed)
+			return i40e_tx_done_cleanup_vec(q, free_cnt);
+		else
+			return i40e_tx_done_cleanup_simple(q, free_cnt);
+	} else {
+		return i40e_tx_done_cleanup_full(q, free_cnt);
+	}
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..4c0dd7374 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,10 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq, uint32_t free_cnt);
+int i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v8 2/4] net/ice: cleanup Tx buffers
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 1/4] net/i40e: " Chenxu Di
@ 2020-01-10  9:58   ` Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 3/4] net/ixgbe: " Chenxu Di
  2020-01-10  9:59   ` [dpdk-dev] [PATCH v8 4/4] net/e1000: " Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-10  9:58 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ice/ice_ethdev.c |   1 +
 drivers/net/ice/ice_rxtx.c   | 113 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |   5 ++
 3 files changed, 119 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..e6d14ad1a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,9 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
@@ -2643,6 +2646,116 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
+int
+ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct ice_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && ice_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ice_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+int
+ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = ice_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+int
+ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+#ifdef RTE_ARCH_X86
+	if (ad->tx_vec_allowed)
+		return ice_tx_done_cleanup_vec(q, free_cnt);
+#endif
+	if (ad->tx_simple_allowed)
+		return ice_tx_done_cleanup_simple(q, free_cnt);
+	else
+		return ice_tx_done_cleanup_full(q, free_cnt);
+}
+
 /* Populate 4 descriptors with data from 4 mbufs */
 static inline void
 tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..c6b547b82 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -202,4 +202,9 @@ uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
 uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_full(struct ice_tx_queue *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_vec(struct ice_tx_queue *txq, uint32_t free_cnt);
+int ice_tx_done_cleanup_simple(struct ice_tx_queue *txq, uint32_t free_cnt);
+
 #endif /* _ICE_RXTX_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v8 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 1/4] net/i40e: " Chenxu Di
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 2/4] net/ice: " Chenxu Di
@ 2020-01-10  9:58   ` Chenxu Di
  2020-01-10 13:49     ` Ananyev, Konstantin
  2020-01-10  9:59   ` [dpdk-dev] [PATCH v8 4/4] net/e1000: " Chenxu Di
  3 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2020-01-10  9:58 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c        |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c          | 109 ++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h          |   8 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c |   1 +
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  |   1 +
 5 files changed, 120 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..75bdd391a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..23c897d3a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,114 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+int
+ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+{
+	uint32_t pkt_cnt;
+	uint16_t i;
+	uint16_t tx_last;
+	uint16_t tx_id;
+	uint16_t nb_tx_to_clean;
+	uint16_t nb_tx_free_last;
+	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+
+	/* Start freeing mbufs from the entry after tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ixgbe_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+int
+ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = ixgbe_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+int
+ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+	return txq->ops->txq_done_cleanup(txq, free_cnt);
+}
+
+int
+ixgbe_tx_done_cleanup(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+{
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (txq->offloads == 0)
+		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
+	else
+		return ixgbe_tx_done_cleanup_full(txq, free_cnt);
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
@@ -2375,6 +2483,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
 	.release_mbufs = ixgbe_tx_queue_release_mbufs,
 	.free_swring = ixgbe_tx_free_swring,
 	.reset = ixgbe_reset_tx_queue,
+	.txq_done_cleanup = ixgbe_tx_done_cleanup,
 };
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..41a3738ce 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -238,6 +238,7 @@ struct ixgbe_txq_ops {
 	void (*release_mbufs)(struct ixgbe_tx_queue *txq);
 	void (*free_swring)(struct ixgbe_tx_queue *txq);
 	void (*reset)(struct ixgbe_tx_queue *txq);
+	int (*txq_done_cleanup)(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
 };
 
 /*
@@ -253,7 +254,6 @@ struct ixgbe_txq_ops {
 			 IXGBE_ADVTXD_DCMD_DEXT |\
 			 IXGBE_ADVTXD_DCMD_EOP)
 
-
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
@@ -285,6 +285,12 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
+int ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
+
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index feb86c61e..cd9b7dc01 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -559,6 +559,7 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
 	.release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
 	.free_swring = ixgbe_tx_free_swring,
 	.reset = ixgbe_reset_tx_queue,
+	.txq_done_cleanup = ixgbe_tx_done_cleanup_vec,
 };
 
 int __attribute__((cold))
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 599ba30e5..63bfac9fa 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -730,6 +730,7 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
 	.release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
 	.free_swring = ixgbe_tx_free_swring,
 	.reset = ixgbe_reset_tx_queue,
+	.txq_done_cleanup = ixgbe_tx_done_cleanup_vec,
 };
 
 int __attribute__((cold))
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v8 4/4] net/e1000: cleanup Tx buffers
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-10  9:59   ` Chenxu Di
  3 siblings, 0 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-10  9:59 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb vf for the API rte_eth_tx_done_cleanup
 to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-09 14:01     ` Ananyev, Konstantin
@ 2020-01-10 10:08       ` Di, ChenxuX
  2020-01-10 12:46         ` Ananyev, Konstantin
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-10 10:08 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Yang, Qiming

Hi Konstantin,

Thanks for your comments. I have fixed almost all of them in the new version of the patch, except one.

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, January 9, 2020 10:02 PM
> To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
> 
> 
> Hi Chenxu,
> 
> Good progress with the _full_ version, but I think some issues still remain.
> More comments inline.
> Konstantin
> 
> >
> > Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c |   4 +
> >  drivers/net/ixgbe/ixgbe_rxtx.c   | 156 ++++++++++++++++++++++++++++++-
> >  drivers/net/ixgbe/ixgbe_rxtx.h   |  10 ++
> >  3 files changed, 169 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 2c6fd0f13..668c36188 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops
> > = {  .udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
> > .udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
> >  .tm_ops_get           = ixgbe_tm_ops_get,
> > +.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> >  };
> >
> >  /*
> > @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops
> = {
> >  .reta_query           = ixgbe_dev_rss_reta_query,
> >  .rss_hash_update      = ixgbe_dev_rss_hash_update,
> >  .rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> > +.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> >  };
> >
> >  /* store statistics names and its offset in stats structure */ @@
> > -1101,6 +1103,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev,
> > void *init_params __rte_unused)  eth_dev->rx_pkt_burst =
> > &ixgbe_recv_pkts;  eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> > eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> >
> >  /*
> >   * For secondary processes, we don't initialise any further as
> > primary @@ -1580,6 +1583,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev
> > *eth_dev)  eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
> > eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;  eth_dev->tx_pkt_burst =
> > &ixgbe_xmit_pkts;
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> >
> >  /* for secondary processes, we don't initialise any further as primary
> >   * has already done this work. Only check we don't need a different
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c
> > b/drivers/net/ixgbe/ixgbe_rxtx.c index fa572d184..122dae425 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -92,6 +92,8 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue,
> struct rte_mbuf **tx_pkts,
> >      uint16_t nb_pkts);
> >  #endif
> >
> > +static ixgbe_tx_done_cleanup_t ixgbe_tx_done_cleanup_op;
> 
> You can't have just one static variable here.
> There could be several ixgbe devices and they could be configured in a different
> way.
> I.e. tx_pkt_burst() is per device, so tx_done_cleanup() also has to be per device.
> Probably the easiest way is to add new entry for tx_done_cleanup into struct
> ixgbe_txq_ops, and set it properly in ixgbe_set_tx_function().
> 
> > +
> >
> /****************************************************************
> *****
> >   *
> >   *  TX functions
> > @@ -2306,6 +2308,152 @@ ixgbe_tx_queue_release_mbufs(struct
> > ixgbe_tx_queue *txq)  }  }
> >
> > +int
> > +ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> 
> As a nit I would change _scalar to _full or so.
> 
> > +{
> > +	uint32_t pkt_cnt;
> > +	uint16_t i;
> > +	uint16_t tx_last;
> > +	uint16_t tx_id;
> > +	uint16_t nb_tx_to_clean;
> > +	uint16_t nb_tx_free_last;
> > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> > +
> > +	/* Start free mbuf from the next of tx_tail */
> > +	tx_last = txq->tx_tail;
> > +	tx_id  = swr_ring[tx_last].next_id;
> > +
> > +	if (txq->nb_tx_free == 0)
> > +		if (ixgbe_xmit_cleanup(txq))
> 
> 
> As a nit it could be just if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
> 
> > +			return 0;
> > +
> > +	nb_tx_to_clean = txq->nb_tx_free;
> > +	nb_tx_free_last = txq->nb_tx_free;
> > +	if (!free_cnt)
> > +		free_cnt = txq->nb_tx_desc;
> > +
> > +	/* Loop through swr_ring to count the number of
> > +	 * freeable mbufs and packets.
> > +	 */
> > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > +		for (i = 0; i < nb_tx_to_clean &&
> > +			pkt_cnt < free_cnt &&
> > +			tx_id != tx_last; i++) {
> > +			if (swr_ring[tx_id].mbuf != NULL) {
> > +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> > +				swr_ring[tx_id].mbuf = NULL;
> > +
> > +				/*
> > +				 * last segment in the packet,
> > +				 * increment packet count
> > +				 */
> > +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> > +			}
> > +
> > +			tx_id = swr_ring[tx_id].next_id;
> > +		}
> > +
> > +		if (tx_id == tx_last || txq->tx_rs_thresh
> > +			> txq->nb_tx_desc - txq->nb_tx_free)
> 
> First condition (tx_id == tx_last) is probably redundant here.
> 

I think it is necessary. The txq may transmit packets while the API is being called.
So txq->nb_tx_free may change.

If (tx_id == tx_last), it will break the loop above and the function is done and returns.
However, if more than txq->tx_rs_thresh packets are sent into the txq while the function is running,
it will never return and falls into an endless loop.

> > +			break;
> > +
> > +		if (pkt_cnt < free_cnt) {
> > +			if (ixgbe_xmit_cleanup(txq))
> > +				break;
> > +
> > +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> > +			nb_tx_free_last = txq->nb_tx_free;
> > +		}
> > +	}
> > +
> > +	PMD_TX_FREE_LOG(DEBUG,
> > +		"Free %u Packets successfully "
> > +		"(port=%d queue=%d)",
> > +		pkt_cnt, txq->port_id, txq->queue_id);
> > +
> > +	return (int)pkt_cnt;
> > +}
> > +
> > +int
> > +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> > +			uint32_t free_cnt __rte_unused)
> > +{
> > +	return -ENOTSUP;
> > +}
> > +
> > +int
> > +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> > +{
> > +	uint16_t i;
> > +	uint16_t tx_first;
> > +	uint16_t tx_id;
> > +	uint32_t pkt_cnt;
> > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> 
> 
> Looks overcomplicated here.
> TX simple (and vec) doesn't support multi-seg packets, so one TXD - one mbuf,
> and one packet.
> And ixgbe_tx_free_bufs() always returns/frees either 0 or tx_rs_thresh
> mbufs/packets.
> So it probably can be something like that:
> 
> ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt) {
>    if (free_cnt == 0)
>       free_cnt = txq->nb_tx_desc;
> 
>    cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
>    for (i = 0; i < cnt; i += n) {
>          n = ixgbe_tx_free_bufs(txq);
>          if (n == 0)
>             break;
>    }
>    return i;
> }
> 
> > +
> > +	/* Start free mbuf from tx_first */
> > +	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
> > +	tx_id  = tx_first;
> > +
> > +	/* while free_cnt is 0,
> > +	 * suppose one mbuf per packet,
> > +	 * try to free packets as many as possible
> > +	 */
> > +	if (free_cnt == 0)
> > +		free_cnt = txq->nb_tx_desc;
> > +
> > +	/* Loop through swr_ring to count freeable packets */
> > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> > +			break;
> > +
> > +		if (!ixgbe_tx_free_bufs(txq))
> > +			break;
> > +
> > +		for (i = 0; i != txq->tx_rs_thresh && tx_id != tx_first; i++) {
> > +			/* last segment in the packet,
> > +			 * increment packet count
> > +			 */
> > +			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
> > +			tx_id = swr_ring[tx_id].next_id;
> > +		}
> > +
> > +		if (tx_id == tx_first)
> > +			break;
> > +	}
> > +
> > +	PMD_TX_FREE_LOG(DEBUG,
> > +		"Free %u packets successfully "
> > +		"(port=%d queue=%d)",
> > +		pkt_cnt, txq->port_id, txq->queue_id);
> > +
> > +	return (int)pkt_cnt;
> > +}
> > +
> > +int
> > +ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
> > +{
> > +	ixgbe_tx_done_cleanup_t func = ixgbe_get_tx_done_cleanup_func();
> > +
> > +	if (!func)
> > +		return -ENOTSUP;
> > +
> > +	return func(txq, free_cnt);
> > +}
> > +
> > +void
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn)
> > +{
> > +	ixgbe_tx_done_cleanup_op = fn;
> > +}
> > +
> > +ixgbe_tx_done_cleanup_t
> > +ixgbe_get_tx_done_cleanup_func(void)
> > +{
> > +	return ixgbe_tx_done_cleanup_op;
> > +}
> > +
> >  static void __attribute__((cold))
> >  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)  { @@ -2398,9
> > +2546,14 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct
> > ixgbe_tx_queue *txq)
> >  ixgbe_txq_vec_setup(txq) == 0)) {
> >  PMD_INIT_LOG(DEBUG, "Vector tx enabled.");  dev->tx_pkt_burst =
> > ixgbe_xmit_pkts_vec; -} else
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_vec);
> > +} else {
> >  #endif
> >  dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_simple);
> > +#ifdef RTE_IXGBE_INC_VECTOR
> > +}
> > +#endif
> >  } else {
> >  PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
> > PMD_INIT_LOG(DEBUG, @@ -2412,6 +2565,7 @@
> ixgbe_set_tx_function(struct
> > rte_eth_dev *dev, struct ixgbe_tx_queue *txq)  (unsigned
> > long)RTE_PMD_IXGBE_TX_MAX_BURST);  dev->tx_pkt_burst =
> > ixgbe_xmit_pkts;  dev->tx_pkt_prepare = ixgbe_prep_pkts;
> > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> >  }
> >  }
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h
> > b/drivers/net/ixgbe/ixgbe_rxtx.h index 505d344b9..a52597aa9 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > @@ -253,6 +253,8 @@ struct ixgbe_txq_ops {
> >   IXGBE_ADVTXD_DCMD_DEXT |\
> >   IXGBE_ADVTXD_DCMD_EOP)
> >
> > +typedef int (*ixgbe_tx_done_cleanup_t)(struct ixgbe_tx_queue *txq,
> > +uint32_t free_cnt);
> >
> >  /* Takes an ethdev and a queue and sets up the tx function to be used based
> on
> >   * the queue parameters. Used in tx_queue_setup by primary process
> > and then @@ -285,6 +287,14 @@ int
> > ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);  int
> > ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);  void
> > ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> >
> > +void ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn);
> > +ixgbe_tx_done_cleanup_t ixgbe_get_tx_done_cleanup_func(void);
> > +
> > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > +int ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > +int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > +int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > +
> >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> >
> > --
> > 2.17.1
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-10 10:08       ` Di, ChenxuX
@ 2020-01-10 12:46         ` Ananyev, Konstantin
  0 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-10 12:46 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming


Hi Chenxu,

> Hi Konstantin,
> 
> Thanks for your comments. I have fixed almost all of them in the new version of the patch, except one.
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, January 9, 2020 10:02 PM
> > To: Di, ChenxuX <chenxux.di@intel.com>; dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> > <chenxux.di@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v7 3/4] net/ixgbe: cleanup Tx buffers
> >
> >
> > Hi Chenxu,
> >
> > Good progress with the _full_ version, but I think some issues still remain.
> > More comments inline.
> > Konstantin
> >
> > >
> > > Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> > > ---
> > >  drivers/net/ixgbe/ixgbe_ethdev.c |   4 +
> > >  drivers/net/ixgbe/ixgbe_rxtx.c   | 156 ++++++++++++++++++++++++++++++-
> > >  drivers/net/ixgbe/ixgbe_rxtx.h   |  10 ++
> > >  3 files changed, 169 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > index 2c6fd0f13..668c36188 100644
> > > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > > @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops
> > > = {  .udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
> > > .udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
> > >  .tm_ops_get           = ixgbe_tm_ops_get,
> > > +.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> > >  };
> > >
> > >  /*
> > > @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops
> > = {
> > >  .reta_query           = ixgbe_dev_rss_reta_query,
> > >  .rss_hash_update      = ixgbe_dev_rss_hash_update,
> > >  .rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> > > +.tx_done_cleanup      = ixgbe_tx_done_cleanup,
> > >  };
> > >
> > >  /* store statistics names and its offset in stats structure */ @@
> > > -1101,6 +1103,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev,
> > > void *init_params __rte_unused)  eth_dev->rx_pkt_burst =
> > > &ixgbe_recv_pkts;  eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
> > > eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >
> > >  /*
> > >   * For secondary processes, we don't initialise any further as
> > > primary @@ -1580,6 +1583,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev
> > > *eth_dev)  eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
> > > eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;  eth_dev->tx_pkt_burst =
> > > &ixgbe_xmit_pkts;
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >
> > >  /* for secondary processes, we don't initialise any further as primary
> > >   * has already done this work. Only check we don't need a different
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > b/drivers/net/ixgbe/ixgbe_rxtx.c index fa572d184..122dae425 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > @@ -92,6 +92,8 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue,
> > struct rte_mbuf **tx_pkts,
> > >      uint16_t nb_pkts);
> > >  #endif
> > >
> > > +static ixgbe_tx_done_cleanup_t ixgbe_tx_done_cleanup_op;
> >
> > You can't have just one static variable here.
> > There could be several ixgbe devices and they could be configured in a different
> > way.
> > I.e. tx_pkt_burst() is per device, so tx_done_cleanup() also has to be per device.
> > Probably the easiest way is to add new entry for tx_done_cleanup into struct
> > ixgbe_txq_ops, and set it properly in ixgbe_set_tx_function().
> >
> > > +
> > >
> > /****************************************************************
> > *****
> > >   *
> > >   *  TX functions
> > > @@ -2306,6 +2308,152 @@ ixgbe_tx_queue_release_mbufs(struct
> > > ixgbe_tx_queue *txq)  }  }
> > >
> > > +int
> > > +ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> >
> > As a nit I would change _scalar to _full or so.
> >
> > > +{
> > > +	uint32_t pkt_cnt;
> > > +	uint16_t i;
> > > +	uint16_t tx_last;
> > > +	uint16_t tx_id;
> > > +	uint16_t nb_tx_to_clean;
> > > +	uint16_t nb_tx_free_last;
> > > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> > > +
> > > +	/* Start free mbuf from the next of tx_tail */
> > > +	tx_last = txq->tx_tail;
> > > +	tx_id  = swr_ring[tx_last].next_id;
> > > +
> > > +	if (txq->nb_tx_free == 0)
> > > +		if (ixgbe_xmit_cleanup(txq))
> >
> >
> > As a nit it could be just if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
> >
> > > +			return 0;
> > > +
> > > +	nb_tx_to_clean = txq->nb_tx_free;
> > > +	nb_tx_free_last = txq->nb_tx_free;
> > > +	if (!free_cnt)
> > > +		free_cnt = txq->nb_tx_desc;
> > > +
> > > +	/* Loop through swr_ring to count the number of
> > > +	 * freeable mbufs and packets.
> > > +	 */
> > > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > > +		for (i = 0; i < nb_tx_to_clean &&
> > > +			pkt_cnt < free_cnt &&
> > > +			tx_id != tx_last; i++) {
> > > +			if (swr_ring[tx_id].mbuf != NULL) {
> > > +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> > > +				swr_ring[tx_id].mbuf = NULL;
> > > +
> > > +				/*
> > > +				 * last segment in the packet,
> > > +				 * increment packet count
> > > +				 */
> > > +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> > > +			}
> > > +
> > > +			tx_id = swr_ring[tx_id].next_id;
> > > +		}
> > > +
> > > +		if (tx_id == tx_last || txq->tx_rs_thresh
> > > +			> txq->nb_tx_desc - txq->nb_tx_free)
> >
> > First condition (tx_id == tx_last) is probably redundant here.
> >
> 
> I think it is necessary. The txq may transmit packets while the API is being called.

Nope, that is not possible.
None of the ethdev RX/TX API is thread safe.
It would be a race condition that will most likely cause either a crash or memory corruption.

> So txq->nb_tx_free may change.
> 
> If (tx_id == tx_last), it will break the loop above and the function is done and returns.
> However, if more than txq->tx_rs_thresh packets are sent into the txq while the function is running,
> it will never return and falls into an endless loop.
> 
> > > +			break;
> > > +
> > > +		if (pkt_cnt < free_cnt) {
> > > +			if (ixgbe_xmit_cleanup(txq))
> > > +				break;
> > > +
> > > +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> > > +			nb_tx_free_last = txq->nb_tx_free;
> > > +		}
> > > +	}
> > > +
> > > +	PMD_TX_FREE_LOG(DEBUG,
> > > +		"Free %u Packets successfully "
> > > +		"(port=%d queue=%d)",
> > > +		pkt_cnt, txq->port_id, txq->queue_id);
> > > +
> > > +	return (int)pkt_cnt;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> > > +			uint32_t free_cnt __rte_unused)
> > > +{
> > > +	return -ENOTSUP;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> > > +{
> > > +	uint16_t i;
> > > +	uint16_t tx_first;
> > > +	uint16_t tx_id;
> > > +	uint32_t pkt_cnt;
> > > +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> >
> >
> > Looks overcomplicated here.
> > TX simple (and vec) doesn't support multi-seg packets, so one TXD - one mbuf,
> > and one packet.
> > And ixgbe_tx_free_bufs() always returns/frees either 0 or tx_rs_thresh
> > mbufs/packets.
> > So it probably can be something like that:
> >
> > ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt) {
> >    if (free_cnt == 0)
> >       free_cnt = txq->nb_tx_desc;
> > 
> >    cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> >    for (i = 0; i < cnt; i += n) {
> >          n = ixgbe_tx_free_bufs(txq);
> >          if (n == 0)
> >             break;
> >    }
> >    return i;
> > }
> >
> > > +
> > > +	/* Start free mbuf from tx_first */
> > > +	tx_first = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
> > > +	tx_id  = tx_first;
> > > +
> > > +	/* while free_cnt is 0,
> > > +	 * suppose one mbuf per packet,
> > > +	 * try to free packets as many as possible
> > > +	 */
> > > +	if (free_cnt == 0)
> > > +		free_cnt = txq->nb_tx_desc;
> > > +
> > > +	/* Loop through swr_ring to count freeable packets */
> > > +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> > > +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> > > +			break;
> > > +
> > > +		if (!ixgbe_tx_free_bufs(txq))
> > > +			break;
> > > +
> > > +		for (i = 0; i != txq->tx_rs_thresh && tx_id != tx_first; i++) {
> > > +			/* last segment in the packet,
> > > +			 * increment packet count
> > > +			 */
> > > +			pkt_cnt += (tx_id == swr_ring[tx_id].last_id);
> > > +			tx_id = swr_ring[tx_id].next_id;
> > > +		}
> > > +
> > > +		if (tx_id == tx_first)
> > > +			break;
> > > +	}
> > > +
> > > +	PMD_TX_FREE_LOG(DEBUG,
> > > +		"Free %u packets successfully "
> > > +		"(port=%d queue=%d)",
> > > +		pkt_cnt, txq->port_id, txq->queue_id);
> > > +
> > > +	return (int)pkt_cnt;
> > > +}
> > > +
> > > +int
> > > +ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt)
> > > +{
> > > +	ixgbe_tx_done_cleanup_t func = ixgbe_get_tx_done_cleanup_func();
> > > +
> > > +	if (!func)
> > > +		return -ENOTSUP;
> > > +
> > > +	return func(txq, free_cnt);
> > > +}
> > > +
> > > +void
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn)
> > > +{
> > > +	ixgbe_tx_done_cleanup_op = fn;
> > > +}
> > > +
> > > +ixgbe_tx_done_cleanup_t
> > > +ixgbe_get_tx_done_cleanup_func(void)
> > > +{
> > > +	return ixgbe_tx_done_cleanup_op;
> > > +}
> > > +
> > >  static void __attribute__((cold))
> > >  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)  { @@ -2398,9
> > > +2546,14 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct
> > > ixgbe_tx_queue *txq)
> > >  ixgbe_txq_vec_setup(txq) == 0)) {
> > >  PMD_INIT_LOG(DEBUG, "Vector tx enabled.");  dev->tx_pkt_burst =
> > > ixgbe_xmit_pkts_vec; -} else
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_vec);
> > > +} else {
> > >  #endif
> > >  dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_simple);
> > > +#ifdef RTE_IXGBE_INC_VECTOR
> > > +}
> > > +#endif
> > >  } else {
> > >  PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
> > > PMD_INIT_LOG(DEBUG, @@ -2412,6 +2565,7 @@
> > ixgbe_set_tx_function(struct
> > > rte_eth_dev *dev, struct ixgbe_tx_queue *txq)  (unsigned
> > > long)RTE_PMD_IXGBE_TX_MAX_BURST);  dev->tx_pkt_burst =
> > > ixgbe_xmit_pkts;  dev->tx_pkt_prepare = ixgbe_prep_pkts;
> > > +ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_scalar);
> > >  }
> > >  }
> > >
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > b/drivers/net/ixgbe/ixgbe_rxtx.h index 505d344b9..a52597aa9 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > @@ -253,6 +253,8 @@ struct ixgbe_txq_ops {
> > >   IXGBE_ADVTXD_DCMD_DEXT |\
> > >   IXGBE_ADVTXD_DCMD_EOP)
> > >
> > > +typedef int (*ixgbe_tx_done_cleanup_t)(struct ixgbe_tx_queue *txq,
> > > +uint32_t free_cnt);
> > >
> > >  /* Takes an ethdev and a queue and sets up the tx function to be used based
> > on
> > >   * the queue parameters. Used in tx_queue_setup by primary process
> > > and then @@ -285,6 +287,14 @@ int
> > > ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);  int
> > > ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);  void
> > > ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> > >
> > > +void ixgbe_set_tx_done_cleanup_func(ixgbe_tx_done_cleanup_t fn);
> > > +ixgbe_tx_done_cleanup_t ixgbe_get_tx_done_cleanup_func(void);
> > > +
> > > +int ixgbe_tx_done_cleanup(void *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_scalar(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> > > +
> > >  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
> > >  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> > >
> > > --
> > > 2.17.1
> >
> 


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v8 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-10 13:49     ` Ananyev, Konstantin
  0 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-10 13:49 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX


> 
> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c        |   2 +
>  drivers/net/ixgbe/ixgbe_rxtx.c          | 109 ++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.h          |   8 +-
>  drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c |   1 +
>  drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  |   1 +
>  5 files changed, 120 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 2c6fd0f13..75bdd391a 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>  	.tm_ops_get           = ixgbe_tm_ops_get,
> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>  };
> 
>  /*
> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.reta_query           = ixgbe_dev_rss_reta_query,
>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index fa572d184..23c897d3a 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2306,6 +2306,114 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>  	}
>  }
> 
> +int
> +ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> +{
> +	uint32_t pkt_cnt;
> +	uint16_t i;
> +	uint16_t tx_last;
> +	uint16_t tx_id;
> +	uint16_t nb_tx_to_clean;
> +	uint16_t nb_tx_free_last;
> +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> +
> +	/* Start free mbuf from the next of tx_tail */
> +	tx_last = txq->tx_tail;
> +	tx_id  = swr_ring[tx_last].next_id;
> +
> +	if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
> +		return 0;
> +
> +	nb_tx_to_clean = txq->nb_tx_free;
> +	nb_tx_free_last = txq->nb_tx_free;
> +	if (!free_cnt)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	/* Loop through swr_ring to count the number of
> +	 * freeable mbufs and packets.
> +	 */
> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> +		for (i = 0; i < nb_tx_to_clean &&
> +			pkt_cnt < free_cnt &&
> +			tx_id != tx_last; i++) {
> +			if (swr_ring[tx_id].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> +				swr_ring[tx_id].mbuf = NULL;
> +
> +				/*
> +				 * last segment in the packet,
> +				 * increment packet count
> +				 */
> +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> +			}
> +
> +			tx_id = swr_ring[tx_id].next_id;
> +		}
> +
> +		if (txq->tx_rs_thresh > txq->nb_tx_desc -
> +			txq->nb_tx_free || tx_id == tx_last)
> +			break;
> +
> +		if (pkt_cnt < free_cnt) {
> +			if (ixgbe_xmit_cleanup(txq))
> +				break;
> +
> +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> +			nb_tx_free_last = txq->nb_tx_free;
> +		}
> +	}
> +
> +	return (int)pkt_cnt;
> +}
> +
> +int
> +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> +			uint32_t free_cnt __rte_unused)
> +{
> +	return -ENOTSUP;
> +}
> +
> +int
> +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	int i, n, cnt;
> +
> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> +
> +	for (i = 0; i < cnt; i += n) {
> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> +			break;
> +
> +		n = ixgbe_tx_free_bufs(txq);
> +
> +		if (n == 0)
> +			break;
> +	}
> +
> +	return i;
> +}
> +
> +int
> +ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
> +{
> +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
> +	return txq->ops->txq_done_cleanup(txq, free_cnt);
> +}
> +
> +int
> +ixgbe_tx_done_cleanup(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> +{
> +	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
> +	if (txq->offloads == 0)
> +		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
> +	else
> +		return ixgbe_tx_done_cleanup_full(txq, free_cnt);
> +}
> +
>  static void __attribute__((cold))
>  ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
>  {
> @@ -2375,6 +2483,7 @@ static const struct ixgbe_txq_ops def_txq_ops = {
>  	.release_mbufs = ixgbe_tx_queue_release_mbufs,
>  	.free_swring = ixgbe_tx_free_swring,
>  	.reset = ixgbe_reset_tx_queue,
> +	.txq_done_cleanup = ixgbe_tx_done_cleanup,
>  };
> 
>  /* Takes an ethdev and a queue and sets up the tx function to be used based on
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 505d344b9..41a3738ce 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -238,6 +238,7 @@ struct ixgbe_txq_ops {
>  	void (*release_mbufs)(struct ixgbe_tx_queue *txq);
>  	void (*free_swring)(struct ixgbe_tx_queue *txq);
>  	void (*reset)(struct ixgbe_tx_queue *txq);
> +	int (*txq_done_cleanup)(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
>  };
> 
>  /*
> @@ -253,7 +254,6 @@ struct ixgbe_txq_ops {
>  			 IXGBE_ADVTXD_DCMD_DEXT |\
>  			 IXGBE_ADVTXD_DCMD_EOP)
> 
> -
>  /* Takes an ethdev and a queue and sets up the tx function to be used based on
>   * the queue parameters. Used in tx_queue_setup by primary process and then
>   * in dev_init by secondary process when attaching to an existing ethdev.
> @@ -285,6 +285,12 @@ int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> 
> +int ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);

As a nit: I don't think you need to make these 4 functions below external.
_cleanup(), cleanup_full and cleanup_simple can be static ones in ixgbe_rxtx.c

_cleanup_vec() can be static in ixgbe_rxtx_vec_common.h
BTW, I think _cleanup_vec() will be identical to cleanup_simple().

Apart, from that:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

As a side note, I think we need to add to test-pmd the ability to call/test
tx_done_cleanup (either as a separate command, a new fwd mode, or ...).
But that is probably subject for a separate patch series. 
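
Just to illustrate the idea, the core of it would probably be something
like the below (a rough sketch only, not a proper test-pmd integration):

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* to be called in the fwd loop after the usual tx_burst() */
static inline void
flush_txq(uint16_t tx_port, uint16_t tx_queue)
{
	/* free_cnt == 0: try to reclaim the whole ring */
	int n = rte_eth_tx_done_cleanup(tx_port, tx_queue, 0);

	if (n < 0 && n != -ENOTSUP)
		printf("tx_done_cleanup(port %u): error %d\n", tx_port, n);
}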

> +int ixgbe_tx_done_cleanup(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +int ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, uint32_t free_cnt);
> +
>  extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
>  extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
> 
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> index feb86c61e..cd9b7dc01 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> @@ -559,6 +559,7 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
>  	.release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
>  	.free_swring = ixgbe_tx_free_swring,
>  	.reset = ixgbe_reset_tx_queue,
> +	.txq_done_cleanup = ixgbe_tx_done_cleanup_vec,
>  };
> 
>  int __attribute__((cold))
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> index 599ba30e5..63bfac9fa 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -730,6 +730,7 @@ static const struct ixgbe_txq_ops vec_txq_ops = {
>  	.release_mbufs = ixgbe_tx_queue_release_mbufs_vec,
>  	.free_swring = ixgbe_tx_free_swring,
>  	.reset = ixgbe_reset_tx_queue,
> +	.txq_done_cleanup = ixgbe_tx_done_cleanup_vec,
>  };
> 
>  int __attribute__((cold))
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v9 0/4] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (9 preceding siblings ...)
  2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
@ 2020-01-13  9:57 ` Chenxu Di
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 1/4] net/i40e: " Chenxu Di
                     ` (3 more replies)
  2020-01-14  2:22 ` [dpdk-dev] [PATCH 0/4] drivers/net: " Ye Xiaolong
  11 siblings, 4 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-13  9:57 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the drivers including i40e, ice, ixgbe
and igb vf for the API rte_eth_tx_done_cleanup to force
free consumed buffers on the Tx ring.

---
v9:
removed the function pointer in ixgbe.
changed the cleanup functions to static.
v8:
replaced the function pointer with a different approach.
v7:
changed the design of the code to reuse existing functions.
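
As a toy illustration of the packet accounting shared by the _full
variants in this series (a standalone model of the sw_ring walk, not
driver code): every entry's mbuf slot is freed, but the packet counter
only advances on the entry whose last_id equals its own index, i.e. on
the last segment of a possibly multi-segment packet.

#include <stdio.h>
#include <stdint.h>

struct entry { int has_mbuf; uint16_t last_id; uint16_t next_id; };

int main(void)
{
	/* a 4-entry ring holding one 3-segment packet (entries 0..2)
	 * and one single-segment packet (entry 3)
	 */
	struct entry ring[4] = {
		{1, 2, 1}, {1, 2, 2}, {1, 2, 3}, {1, 3, 0},
	};
	uint32_t pkt_cnt = 0;
	uint16_t tx_id = 0;
	int i;

	for (i = 0; i < 4; i++) {
		if (ring[tx_id].has_mbuf) {
			ring[tx_id].has_mbuf = 0; /* rte_pktmbuf_free_seg() */
			pkt_cnt += (ring[tx_id].last_id == tx_id);
		}
		tx_id = ring[tx_id].next_id;
	}
	printf("%u packets freed\n", pkt_cnt); /* prints "2 packets freed" */
	return 0;
}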

Chenxu Di (4):
  net/i40e: cleanup Tx buffers
  net/ice: cleanup Tx buffers
  net/ixgbe: cleanup Tx buffers
  net/e1000: cleanup Tx buffers

 drivers/net/e1000/igb_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 107 ++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 drivers/net/ice/ice_ethdev.c      |   1 +
 drivers/net/ice/ice_rxtx.c        | 111 ++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h        |   2 +
 drivers/net/ixgbe/ixgbe_ethdev.c  |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c    | 109 +++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h    |   2 +-
 11 files changed, 337 insertions(+), 1 deletion(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v9 1/4] net/i40e: cleanup Tx buffers
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
@ 2020-01-13  9:57   ` Chenxu Di
  2020-01-13 11:08     ` Ananyev, Konstantin
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 74+ messages in thread
From: Chenxu Di @ 2020-01-13  9:57 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the i40e driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
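One behavioural note, shown with a standalone arithmetic example (the
numbers are illustrative, not from the patch): the simple path frees
descriptors only in tx_rs_thresh-sized batches, so a request is rounded
down first.

#include <stdio.h>

int main(void)
{
	unsigned int tx_rs_thresh = 32;	/* e.g. queue configuration */
	unsigned int free_cnt = 100;	/* caller's request */

	/* same rounding as i40e_tx_done_cleanup_simple() */
	unsigned int cnt = free_cnt - free_cnt % tx_rs_thresh;

	/* prints: requested 100, will clean at most 96 */
	printf("requested %u, will clean at most %u\n", free_cnt, cnt);
	return 0;
}
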
 drivers/net/i40e/i40e_ethdev.c    |   1 +
 drivers/net/i40e/i40e_ethdev_vf.c |   1 +
 drivers/net/i40e/i40e_rxtx.c      | 107 ++++++++++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx.h      |   1 +
 4 files changed, 110 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 5999c964b..fad47a942 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -522,6 +522,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
 	.mac_addr_set                 = i40e_set_default_mac_addr,
 	.mtu_set                      = i40e_dev_mtu_set,
 	.tm_ops_get                   = i40e_tm_ops_get,
+	.tx_done_cleanup              = i40e_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 5dba0928b..0ca5417d7 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -215,6 +215,7 @@ static const struct eth_dev_ops i40evf_eth_dev_ops = {
 	.rss_hash_conf_get    = i40evf_dev_rss_hash_conf_get,
 	.mtu_set              = i40evf_dev_mtu_set,
 	.mac_addr_set         = i40evf_set_default_mac_addr,
+	.tx_done_cleanup      = i40e_tx_done_cleanup,
 };
 
 /*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..058704c6e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2455,6 +2455,113 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 	}
 }
 
+static int
+i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	struct i40e_tx_entry *swr_ring = txq->sw_ring;
+	uint16_t i, tx_last, tx_id;
+	uint16_t nb_tx_free_last;
+	uint16_t nb_tx_to_clean;
+	uint32_t pkt_cnt;
+
+	/* Start free mbuf from the next of tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && i40e_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (i40e_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+static int
+i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = i40e_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+static int
+i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+i40e_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
+	struct i40e_adapter *ad =
+		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (ad->tx_simple_allowed) {
+		if (ad->tx_vec_allowed)
+			return i40e_tx_done_cleanup_vec(q, free_cnt);
+		else
+			return i40e_tx_done_cleanup_simple(q, free_cnt);
+	} else {
+		return i40e_tx_done_cleanup_full(q, free_cnt);
+	}
+}
+
 void
 i40e_reset_tx_queue(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 2106bb355..8f11f011a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -212,6 +212,7 @@ void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
 void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
 void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 1/4] net/i40e: " Chenxu Di
@ 2020-01-13  9:57   ` Chenxu Di
  2020-01-14  1:55     ` Yang, Qiming
                       ` (2 more replies)
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 4/4] net/e1000: " Chenxu Di
  3 siblings, 3 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-13  9:57 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ice driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
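Note that on x86 builds where the vector Tx path was chosen, this
implementation reports -ENOTSUP, so callers should treat cleanup as
best effort.  A sketch of a tolerant caller (the helper name is
hypothetical, not part of the patch):

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: try an explicit cleanup, and fall back to the
 * Tx burst function's own descriptor recycling when the PMD cannot
 * do it (the vector Tx path returns -ENOTSUP).
 */
static int
try_tx_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t cnt)
{
	int n = rte_eth_tx_done_cleanup(port_id, queue_id, cnt);

	if (n == -ENOTSUP) {
		printf("port %u: cleanup unsupported on this Tx path\n",
		       port_id);
		return 0;	/* nothing freed, but not fatal */
	}
	return n;	/* >= 0: packets freed; < 0: other error */
}
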
 drivers/net/ice/ice_ethdev.c |   1 +
 drivers/net/ice/ice_rxtx.c   | 111 +++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |   2 +
 3 files changed, 114 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..b55cdbf74 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.filter_ctrl                  = ice_dev_filter_ctrl,
 	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
+	.tx_done_cleanup              = ice_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2db174456..8f12df807 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -863,6 +863,9 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+
+
+
 int
 ice_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
@@ -2643,6 +2646,114 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
+static int
+ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	struct ice_tx_entry *swr_ring = txq->sw_ring;
+	uint16_t i, tx_last, tx_id;
+	uint16_t nb_tx_free_last;
+	uint16_t nb_tx_to_clean;
+	uint32_t pkt_cnt;
+
+	/* Start free mbuf from the next of tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && ice_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ice_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+static int
+ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+static int
+ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = ice_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+int
+ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
+	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
+	struct ice_adapter *ad =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+#ifdef RTE_ARCH_X86
+	if (ad->tx_vec_allowed)
+		return ice_tx_done_cleanup_vec(q, free_cnt);
+#endif
+	if (ad->tx_simple_allowed)
+		return ice_tx_done_cleanup_simple(q, free_cnt);
+	else
+		return ice_tx_done_cleanup_full(q, free_cnt);
+}
+
 /* Populate 4 descriptors with data from 4 mbufs */
 static inline void
 tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 9e3d2cd07..0946ee69e 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -202,4 +202,6 @@ uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
 uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
+int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
+
 #endif /* _ICE_RXTX_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 1/4] net/i40e: " Chenxu Di
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
@ 2020-01-13  9:57   ` Chenxu Di
  2020-01-13 11:07     ` Ananyev, Konstantin
                       ` (2 more replies)
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 4/4] net/e1000: " Chenxu Di
  3 siblings, 3 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-13  9:57 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
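For reviewers: the selection below mirrors the Tx burst function choice
in ixgbe_set_tx_function().  A standalone toy of the decision tree (the
constants are illustrative stand-ins for the real macros):

#include <stdio.h>
#include <stdint.h>

#define TX_MAX_BURST	32	/* stands in for RTE_PMD_IXGBE_TX_MAX_BURST */
#define MAX_FREE_BUF_SZ	64	/* stands in for RTE_IXGBE_TX_MAX_FREE_BUF_SZ */

static const char *
pick_cleanup(uint64_t offloads, uint16_t tx_rs_thresh, int vec_ring)
{
	if (offloads == 0 && tx_rs_thresh >= TX_MAX_BURST) {
		if (tx_rs_thresh <= MAX_FREE_BUF_SZ && vec_ring)
			return "vec (reports -ENOTSUP)";
		return "simple";
	}
	return "full";
}

int main(void)
{
	printf("%s\n", pick_cleanup(0, 32, 0));	/* simple */
	printf("%s\n", pick_cleanup(1, 32, 0));	/* full */
	printf("%s\n", pick_cleanup(0, 32, 1));	/* vec (reports -ENOTSUP) */
	return 0;
}
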
 drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
 drivers/net/ixgbe/ixgbe_rxtx.c   | 109 +++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +-
 3 files changed, 112 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2c6fd0f13..75bdd391a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
 	.tm_ops_get           = ixgbe_tm_ops_get,
+	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
 };
 
 /*
@@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
 	.reta_query           = ixgbe_dev_rss_reta_query,
 	.rss_hash_update      = ixgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
+	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
 };
 
 /* store statistics names and its offset in stats structure */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index fa572d184..a2e85ed5b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2306,6 +2306,115 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 	}
 }
 
+static int
+ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
+{
+	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
+	uint16_t i, tx_last, tx_id;
+	uint16_t nb_tx_free_last;
+	uint16_t nb_tx_to_clean;
+	uint32_t pkt_cnt;
+
+	/* Start free mbuf from the next of tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (txq->tx_rs_thresh > txq->nb_tx_desc -
+			txq->nb_tx_free || tx_id == tx_last)
+			break;
+
+		if (pkt_cnt < free_cnt) {
+			if (ixgbe_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+static int
+ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
+			break;
+
+		n = ixgbe_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+static int
+ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
+			uint32_t free_cnt __rte_unused)
+{
+	return -ENOTSUP;
+}
+
+int
+ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
+	if (txq->offloads == 0 &&
+#ifdef RTE_LIBRTE_SECURITY
+			!(txq->using_ipsec) &&
+#endif
+			txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) {
+#ifdef RTE_IXGBE_INC_VECTOR
+		if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
+				(rte_eal_process_type() != RTE_PROC_PRIMARY ||
+					txq->sw_ring_v != NULL))
+			return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
+#endif
+		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
+	}
+
+	return ixgbe_tx_done_cleanup_full(txq, free_cnt);
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9..57ff2b1a7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -253,7 +253,6 @@ struct ixgbe_txq_ops {
 			 IXGBE_ADVTXD_DCMD_DEXT |\
 			 IXGBE_ADVTXD_DCMD_EOP)
 
-
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
@@ -284,6 +283,7 @@ uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
 int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
+int ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
 
 extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX];
 extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH v9 4/4] net/e1000: cleanup Tx buffers
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
                     ` (2 preceding siblings ...)
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-13  9:57   ` Chenxu Di
  2020-01-13 11:08     ` Ananyev, Konstantin
  2020-01-14  2:49     ` Ye Xiaolong
  3 siblings, 2 replies; 74+ messages in thread
From: Chenxu Di @ 2020-01-13  9:57 UTC (permalink / raw)
  To: dev; +Cc: Yang Qiming, Chenxu Di

Add support to the igb vf for the API rte_eth_tx_done_cleanup
 to force free consumed buffers on Tx ring.

Signed-off-by: Chenxu Di <chenxux.di@intel.com>
---
 drivers/net/e1000/igb_ethdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a3e30dbe5..647d5504f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup       = eth_igb_tx_queue_setup,
 	.tx_queue_release     = eth_igb_tx_queue_release,
+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
 	.rxq_info_get         = igb_rxq_info_get,
 	.txq_info_get         = igb_txq_info_get,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
@ 2020-01-13 11:07     ` Ananyev, Konstantin
  2020-01-16  8:44     ` Ferruh Yigit
  2020-01-16 14:47     ` Ferruh Yigit
  2 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-13 11:07 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> Sent: Monday, January 13, 2020 9:57 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
> 
> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 1/4] net/i40e: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 1/4] net/i40e: " Chenxu Di
@ 2020-01-13 11:08     ` Ananyev, Konstantin
  0 siblings, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-13 11:08 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenxu Di
> Sent: Monday, January 13, 2020 9:57 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX <chenxux.di@intel.com>
> Subject: [dpdk-dev] [PATCH v9 1/4] net/i40e: cleanup Tx buffers
> 
> Add support to the i40e driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 4/4] net/e1000: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 4/4] net/e1000: " Chenxu Di
@ 2020-01-13 11:08     ` Ananyev, Konstantin
  2020-01-14  2:49     ` Ye Xiaolong
  1 sibling, 0 replies; 74+ messages in thread
From: Ananyev, Konstantin @ 2020-01-13 11:08 UTC (permalink / raw)
  To: Di, ChenxuX, dev; +Cc: Yang, Qiming, Di, ChenxuX

> 
> Add support to the igb vf for the API rte_eth_tx_done_cleanup
>  to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/e1000/igb_ethdev.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index a3e30dbe5..647d5504f 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
>  	.tx_descriptor_status = eth_igb_tx_descriptor_status,
>  	.tx_queue_setup       = eth_igb_tx_queue_setup,
>  	.tx_queue_release     = eth_igb_tx_queue_release,
> +	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
>  	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
>  	.rxq_info_get         = igb_rxq_info_get,
>  	.txq_info_get         = igb_txq_info_get,
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
@ 2020-01-14  1:55     ` Yang, Qiming
  2020-01-14 12:40     ` Ferruh Yigit
  2020-01-16  8:43     ` [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers Ferruh Yigit
  2 siblings, 0 replies; 74+ messages in thread
From: Yang, Qiming @ 2020-01-14  1:55 UTC (permalink / raw)
  To: Di, ChenxuX, dev



> -----Original Message-----
> From: Di, ChenxuX <chenxux.di@intel.com>
> Sent: Monday, January 13, 2020 5:57 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>
> Subject: [PATCH v9 2/4] net/ice: cleanup Tx buffers
> 
> Add support to the ice driver for the API rte_eth_tx_done_cleanup to force free
> consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>

Acked-by: Qiming Yang <qiming.yang@intel.com>

> ---
>  drivers/net/ice/ice_ethdev.c |   1 +
>  drivers/net/ice/ice_rxtx.c   | 111 +++++++++++++++++++++++++++++++++++
>  drivers/net/ice/ice_rxtx.h   |   2 +
>  3 files changed, 114 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index de189daba..b55cdbf74 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
>  	.filter_ctrl                  = ice_dev_filter_ctrl,
>  	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
> +	.tx_done_cleanup              = ice_tx_done_cleanup,
>  };
> 
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 2db174456..8f12df807 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -863,6 +863,9 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
>  	return 0;
>  }
> 
> +
> +
> +
>  int
>  ice_rx_queue_setup(struct rte_eth_dev *dev,
>  		   uint16_t queue_idx,
> @@ -2643,6 +2646,114 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
>  	return txq->tx_rs_thresh;
>  }
> 
> +static int
> +ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	struct ice_tx_entry *swr_ring = txq->sw_ring;
> +	uint16_t i, tx_last, tx_id;
> +	uint16_t nb_tx_free_last;
> +	uint16_t nb_tx_to_clean;
> +	uint32_t pkt_cnt;
> +
> +	/* Start free mbuf from the next of tx_tail */
> +	tx_last = txq->tx_tail;
> +	tx_id  = swr_ring[tx_last].next_id;
> +
> +	if (txq->nb_tx_free == 0 && ice_xmit_cleanup(txq))
> +		return 0;
> +
> +	nb_tx_to_clean = txq->nb_tx_free;
> +	nb_tx_free_last = txq->nb_tx_free;
> +	if (!free_cnt)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	/* Loop through swr_ring to count the amount of
> +	 * freeable mbufs and packets.
> +	 */
> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> +		for (i = 0; i < nb_tx_to_clean &&
> +			pkt_cnt < free_cnt &&
> +			tx_id != tx_last; i++) {
> +			if (swr_ring[tx_id].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> +				swr_ring[tx_id].mbuf = NULL;
> +
> +				/*
> +				 * last segment in the packet,
> +				 * increment packet count
> +				 */
> +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> +			}
> +
> +			tx_id = swr_ring[tx_id].next_id;
> +		}
> +
> +		if (txq->tx_rs_thresh > txq->nb_tx_desc -
> +			txq->nb_tx_free || tx_id == tx_last)
> +			break;
> +
> +		if (pkt_cnt < free_cnt) {
> +			if (ice_xmit_cleanup(txq))
> +				break;
> +
> +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> +			nb_tx_free_last = txq->nb_tx_free;
> +		}
> +	}
> +
> +	return (int)pkt_cnt;
> +}
> +
> +static int
> +ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
> +			uint32_t free_cnt __rte_unused)
> +{
> +	return -ENOTSUP;
> +}
> +
> +static int
> +ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	int i, n, cnt;
> +
> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> +
> +	for (i = 0; i < cnt; i += n) {
> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> +			break;
> +
> +		n = ice_tx_free_bufs(txq);
> +
> +		if (n == 0)
> +			break;
> +	}
> +
> +	return i;
> +}
> +
> +int
> +ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
> +{
> +	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
> +	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> +	struct ice_adapter *ad =
> +		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +
> +#ifdef RTE_ARCH_X86
> +	if (ad->tx_vec_allowed)
> +		return ice_tx_done_cleanup_vec(q, free_cnt);
> +#endif
> +	if (ad->tx_simple_allowed)
> +		return ice_tx_done_cleanup_simple(q, free_cnt);
> +	else
> +		return ice_tx_done_cleanup_full(q, free_cnt);
> +}
> +
>  /* Populate 4 descriptors with data from 4 mbufs */
>  static inline void
>  tx4(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts)
> diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
> index 9e3d2cd07..0946ee69e 100644
> --- a/drivers/net/ice/ice_rxtx.h
> +++ b/drivers/net/ice/ice_rxtx.h
> @@ -202,4 +202,6 @@ uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
>  uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
>  				uint16_t nb_pkts);
>  int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
> +int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
> +
>  #endif /* _ICE_RXTX_H_ */
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers
  2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
                   ` (10 preceding siblings ...)
  2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
@ 2020-01-14  2:22 ` Ye Xiaolong
  11 siblings, 0 replies; 74+ messages in thread
From: Ye Xiaolong @ 2020-01-14  2:22 UTC (permalink / raw)
  To: Chenxu Di; +Cc: dev, Yang Qiming

On 12/03, Chenxu Di wrote:
>From: Di ChenxuX <chenxux.di@intel.com>
>
>Add support to the drivers including fm10k, i40e, ice, ixgbe
> for the API rte_eth_tx_done_cleanup to
> force free consumed buffers on Tx ring.
>
>Di ChenxuX (4):
>  net/fm10k: cleanup Tx buffers
>  net/i40e: cleanup Tx buffers
>  net/ice: cleanup Tx buffers
>  net/ixgbe: cleanup Tx buffers
>
> drivers/net/fm10k/fm10k.h         |  2 ++
> drivers/net/fm10k/fm10k_ethdev.c  |  1 +
> drivers/net/fm10k/fm10k_rxtx.c    | 45 +++++++++++++++++++++++++++++++
> drivers/net/i40e/i40e_ethdev.c    |  1 +
> drivers/net/i40e/i40e_ethdev_vf.c |  1 +
> drivers/net/i40e/i40e_rxtx.c      | 40 +++++++++++++++++++++++++++
> drivers/net/i40e/i40e_rxtx.h      |  1 +
> drivers/net/ice/ice_ethdev.c      |  1 +
> drivers/net/ice/ice_rxtx.c        | 41 ++++++++++++++++++++++++++++
> drivers/net/ice/ice_rxtx.h        |  1 +
> drivers/net/ixgbe/ixgbe_ethdev.c  |  2 ++
> drivers/net/ixgbe/ixgbe_rxtx.c    | 39 +++++++++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_rxtx.h    |  2 ++
> 13 files changed, 177 insertions(+)
>
>-- 
>2.17.1
>

Applied to dpdk-next-net-intel, Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 4/4] net/e1000: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 4/4] net/e1000: " Chenxu Di
  2020-01-13 11:08     ` Ananyev, Konstantin
@ 2020-01-14  2:49     ` Ye Xiaolong
  1 sibling, 0 replies; 74+ messages in thread
From: Ye Xiaolong @ 2020-01-14  2:49 UTC (permalink / raw)
  To: Chenxu Di; +Cc: dev, Yang Qiming

On 01/13, Chenxu Di wrote:
>Add support to the igb vf for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
>
>Signed-off-by: Chenxu Di <chenxux.di@intel.com>
>---
> drivers/net/e1000/igb_ethdev.c | 1 +
> 1 file changed, 1 insertion(+)
>
>diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
>index a3e30dbe5..647d5504f 100644
>--- a/drivers/net/e1000/igb_ethdev.c
>+++ b/drivers/net/e1000/igb_ethdev.c

What about em_ethdev.c in the e1000 dir, do we need to add support there as well?

Thanks,
Xiaolong
>@@ -446,6 +446,7 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
> 	.tx_descriptor_status = eth_igb_tx_descriptor_status,
> 	.tx_queue_setup       = eth_igb_tx_queue_setup,
> 	.tx_queue_release     = eth_igb_tx_queue_release,
>+	.tx_done_cleanup      = eth_igb_tx_done_cleanup,
> 	.set_mc_addr_list     = eth_igb_set_mc_addr_list,
> 	.rxq_info_get         = igb_rxq_info_get,
> 	.txq_info_get         = igb_txq_info_get,
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
  2020-01-14  1:55     ` Yang, Qiming
@ 2020-01-14 12:40     ` Ferruh Yigit
  2020-01-15 14:34       ` Ferruh Yigit
  2020-01-16  8:43     ` [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers Ferruh Yigit
  2 siblings, 1 reply; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-14 12:40 UTC (permalink / raw)
  To: Chenxu Di, dev; +Cc: Yang Qiming

On 1/13/2020 9:57 AM, Chenxu Di wrote:
> Add support to the ice driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>

<...>

> +static int
> +ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
> +			uint32_t free_cnt __rte_unused)
> +{
> +	return -ENOTSUP;
> +}
> +
> +static int
> +ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	int i, n, cnt;
> +
> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> +
> +	for (i = 0; i < cnt; i += n) {
> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> +			break;
> +
> +		n = ice_tx_free_bufs(txq);
> +
> +		if (n == 0)
> +			break;
> +	}
> +
> +	return i;
> +}
> +
> +int
> +ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
> +{
> +	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
> +	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> +	struct ice_adapter *ad =
> +		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +
> +#ifdef RTE_ARCH_X86
> +	if (ad->tx_vec_allowed)
> +		return ice_tx_done_cleanup_vec(q, free_cnt);
> +#endif

Hi Chenxu,

This is causing a build error for non-x86 builds [1]. Wrapping
'ice_tx_done_cleanup_vec()' with an #ifdef can solve the error, but why not
remove the #ifdef completely instead?

Would 'tx_vec_allowed' ever be set on non-x86? I think it shouldn't be; if
so, the #ifdef can go away.

[1]
.../dpdk/drivers/net/ice/ice_rxtx.c:2709:1: error: ‘ice_tx_done_cleanup_vec’
defined but not used [-Werror=unused-function]
 ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
 ^~~~~~~~~~~~~~~~~~~~~~~

^ permalink raw reply	[flat|nested] 74+ messages in thread
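
Concretely, the interim fix described above, which the follow-up below applies
while merging, amounts to guarding the unused stub itself. A sketch, assuming
it sits next to the other cleanup helpers in ice_rxtx.c:

	/* Compiled only where ad->tx_vec_allowed can ever be true, so
	 * non-x86 builds no longer see an unused static function.
	 */
	#ifdef RTE_ARCH_X86
	static int
	ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
				uint32_t free_cnt __rte_unused)
	{
		return -ENOTSUP;
	}
	#endif /* RTE_ARCH_X86 */

The cleanup patch later in this thread removes exactly this wrap again.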

* Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-14 12:40     ` Ferruh Yigit
@ 2020-01-15 14:34       ` Ferruh Yigit
  2020-01-16  1:40         ` Di, ChenxuX
  0 siblings, 1 reply; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-15 14:34 UTC (permalink / raw)
  To: Chenxu Di, Xiaolong Ye; +Cc: dev, Yang Qiming

On 1/14/2020 12:40 PM, Ferruh Yigit wrote:
> On 1/13/2020 9:57 AM, Chenxu Di wrote:
>> Add support to the ice driver for the API rte_eth_tx_done_cleanup
>> to force free consumed buffers on Tx ring.
>>
>> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> 
> <...>
> 
>> +static int
>> +ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
>> +			uint32_t free_cnt __rte_unused)
>> +{
>> +	return -ENOTSUP;
>> +}
>> +
>> +static int
>> +ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
>> +			uint32_t free_cnt)
>> +{
>> +	int i, n, cnt;
>> +
>> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
>> +		free_cnt = txq->nb_tx_desc;
>> +
>> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
>> +
>> +	for (i = 0; i < cnt; i += n) {
>> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
>> +			break;
>> +
>> +		n = ice_tx_free_bufs(txq);
>> +
>> +		if (n == 0)
>> +			break;
>> +	}
>> +
>> +	return i;
>> +}
>> +
>> +int
>> +ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
>> +{
>> +	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
>> +	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
>> +	struct ice_adapter *ad =
>> +		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
>> +
>> +#ifdef RTE_ARCH_X86
>> +	if (ad->tx_vec_allowed)
>> +		return ice_tx_done_cleanup_vec(q, free_cnt);
>> +#endif
> 
> Hi Chenxu,
> 
> This is causing a build error for non-x86 builds [1]. Wrapping
> 'ice_tx_done_cleanup_vec()' with an #ifdef can solve the error, but why not
> remove the #ifdef completely instead?
> 
> Would 'tx_vec_allowed' ever be set on non-x86? I think it shouldn't be; if
> so, the #ifdef can go away.
> 
> [1]
> .../dpdk/drivers/net/ice/ice_rxtx.c:2709:1: error: ‘ice_tx_done_cleanup_vec’
> defined but not used [-Werror=unused-function]
>  ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
>  ^~~~~~~~~~~~~~~~~~~~~~~
> 

Hi Chenxu, Xiaolong,

I will fix the build error while merging, by wrapping 'ice_tx_done_cleanup_vec'
with "#ifdef RTE_ARCH_X86",
BUT can you please make an incremental patch to remove the #ifdef?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-15 14:34       ` Ferruh Yigit
@ 2020-01-16  1:40         ` Di, ChenxuX
  2020-01-16  7:09           ` [dpdk-dev] [PATCH] net/ice: cleanup for vec path check Xiaolong Ye
  0 siblings, 1 reply; 74+ messages in thread
From: Di, ChenxuX @ 2020-01-16  1:40 UTC (permalink / raw)
  To: Yigit, Ferruh, Ye, Xiaolong; +Cc: dev, Yang, Qiming



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, January 15, 2020 10:34 PM
> To: Di, ChenxuX <chenxux.di@intel.com>; Ye, Xiaolong <xiaolong.ye@intel.com>
> Cc: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
> 
> On 1/14/2020 12:40 PM, Ferruh Yigit wrote:
> > On 1/13/2020 9:57 AM, Chenxu Di wrote:
> >> Add support to the ice driver for the API rte_eth_tx_done_cleanup to
> >> force free consumed buffers on Tx ring.
> >>
> >> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> >
> > <...>
> >
> >> +static int
> >> +ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
> >> +			uint32_t free_cnt __rte_unused)
> >> +{
> >> +	return -ENOTSUP;
> >> +}
> >> +
> >> +static int
> >> +ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
> >> +			uint32_t free_cnt)
> >> +{
> >> +	int i, n, cnt;
> >> +
> >> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
> >> +		free_cnt = txq->nb_tx_desc;
> >> +
> >> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> >> +
> >> +	for (i = 0; i < cnt; i += n) {
> >> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> >> +			break;
> >> +
> >> +		n = ice_tx_free_bufs(txq);
> >> +
> >> +		if (n == 0)
> >> +			break;
> >> +	}
> >> +
> >> +	return i;
> >> +}
> >> +
> >> +int
> >> +ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
> >> +{
> >> +	struct ice_tx_queue *q = (struct ice_tx_queue *)txq;
> >> +	struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> >> +	struct ice_adapter *ad =
> >> +		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> >> +
> >> +#ifdef RTE_ARCH_X86
> >> +	if (ad->tx_vec_allowed)
> >> +		return ice_tx_done_cleanup_vec(q, free_cnt);
> >> +#endif
> >
> > Hi Chenxu,
> >
> > This is causing a build error for non-x86 builds [1]. Wrapping
> > 'ice_tx_done_cleanup_vec()' with an #ifdef can solve the error, but why
> > not remove the #ifdef completely instead?
> >
> > Would 'tx_vec_allowed' ever be set on non-x86? I think it shouldn't be;
> > if so, the #ifdef can go away.
> >
> > [1]
> > .../dpdk/drivers/net/ice/ice_rxtx.c:2709:1: error: ‘ice_tx_done_cleanup_vec’
> > defined but not used [-Werror=unused-function]
> > ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
> > ^~~~~~~~~~~~~~~~~~~~~~~
> >
> 
> Hi Chenxu, Xiaolong,
> 
> I will fix the build error while merging, by wrapping 'ice_tx_done_cleanup_vec'
> with "#ifdef RTE_ARCH_X86",
> BUT can you please make an incremental patch to remove the #ifdef?
> 
Hi, Xiaolong, Ferruh
Sorry about that, it may be an error I introduced while deleting the parentheses of the if ... else ...

And what should I do now: send a new version of the patch, or add another patch that only removes the #ifdef?


> Thanks,
> ferruh

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [dpdk-dev] [PATCH] net/ice: cleanup for vec path check
  2020-01-16  1:40         ` Di, ChenxuX
@ 2020-01-16  7:09           ` Xiaolong Ye
  2020-01-16 10:19             ` Ferruh Yigit
  2020-01-17  2:21             ` Yang, Qiming
  0 siblings, 2 replies; 74+ messages in thread
From: Xiaolong Ye @ 2020-01-16  7:09 UTC (permalink / raw)
  To: Qiming Yang, Wenzhuo Lu; +Cc: dev, ferruh.yigit, chenxux.di, Xiaolong Ye

Move the conditional compilation block to the inner check helper, so we
can reduce the number of ifdef check pairs used.

Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
---
 drivers/net/ice/ice_rxtx.c            | 9 ---------
 drivers/net/ice/ice_rxtx_vec_common.h | 8 ++++++++
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 71adba809..8feeeb828 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2753,14 +2753,12 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
 	return (int)pkt_cnt;
 }
 
-#ifdef RTE_ARCH_X86
 static int
 ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
 			uint32_t free_cnt __rte_unused)
 {
 	return -ENOTSUP;
 }
-#endif
 
 static int
 ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
@@ -2794,10 +2792,8 @@ ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
-#ifdef RTE_ARCH_X86
 	if (ad->tx_vec_allowed)
 		return ice_tx_done_cleanup_vec(q, free_cnt);
-#endif
 	if (ad->tx_simple_allowed)
 		return ice_tx_done_cleanup_simple(q, free_cnt);
 	else
@@ -2953,7 +2949,6 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-#ifdef RTE_ARCH_X86
 	struct ice_rx_queue *rxq;
 	int i;
 	bool use_avx2 = false;
@@ -2998,8 +2993,6 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 		return;
 	}
 
-#endif
-
 	if (dev->data->scattered_rx) {
 		/* Set the non-LRO scattered function */
 		PMD_INIT_LOG(DEBUG,
@@ -3131,7 +3124,6 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-#ifdef RTE_ARCH_X86
 	struct ice_tx_queue *txq;
 	int i;
 	bool use_avx2 = false;
@@ -3167,7 +3159,6 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 
 		return;
 	}
-#endif
 
 	if (ad->tx_simple_allowed) {
 		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 6b57ff2ae..223aac878 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -267,6 +267,7 @@ ice_tx_vec_queue_default(struct ice_tx_queue *txq)
 static inline int
 ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+#ifdef RTE_ARCH_X86
 	int i;
 	struct ice_rx_queue *rxq;
 	struct ice_adapter *ad =
@@ -283,11 +284,15 @@ ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 	}
 
 	return 0;
+#else
+	return -1;
+#endif
 }
 
 static inline int
 ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+#ifdef RTE_ARCH_X86
 	int i;
 	struct ice_tx_queue *txq;
 
@@ -298,6 +303,9 @@ ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
 	}
 
 	return 0;
+#else
+	return -1;
+#endif
 }
 
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
  2020-01-14  1:55     ` Yang, Qiming
  2020-01-14 12:40     ` Ferruh Yigit
@ 2020-01-16  8:43     ` Ferruh Yigit
  2 siblings, 0 replies; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-16  8:43 UTC (permalink / raw)
  To: Chenxu Di, dev; +Cc: Yang Qiming

On 1/13/2020 9:57 AM, Chenxu Di wrote:
> Add support to the ice driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ice/ice_ethdev.c |   1 +
>  drivers/net/ice/ice_rxtx.c   | 111 +++++++++++++++++++++++++++++++++++
>  drivers/net/ice/ice_rxtx.h   |   2 +
>  3 files changed, 114 insertions(+)
> 
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index de189daba..b55cdbf74 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -220,6 +220,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
>  	.filter_ctrl                  = ice_dev_filter_ctrl,
>  	.udp_tunnel_port_add          = ice_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del          = ice_dev_udp_tunnel_port_del,
> +	.tx_done_cleanup              = ice_tx_done_cleanup,
>  };
>  
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 2db174456..8f12df807 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -863,6 +863,9 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
>  	return 0;
>  }
>  
> +
> +
> +

These empty lines were removed on next-net, FYI.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
  2020-01-13 11:07     ` Ananyev, Konstantin
@ 2020-01-16  8:44     ` Ferruh Yigit
  2020-01-16 14:47     ` Ferruh Yigit
  2 siblings, 0 replies; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-16  8:44 UTC (permalink / raw)
  To: Chenxu Di, dev; +Cc: Yang Qiming

On 1/13/2020 9:57 AM, Chenxu Di wrote:
> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>

<...>

> @@ -253,7 +253,6 @@ struct ixgbe_txq_ops {
>  			 IXGBE_ADVTXD_DCMD_DEXT |\
>  			 IXGBE_ADVTXD_DCMD_EOP)
>  
> -
>  /* Takes an ethdev and a queue and sets up the tx function to be used based on
>   * the queue parameters. Used in tx_queue_setup by primary process and then
>   * in dev_init by secondary process when attaching to an existing ethdev.

Also this change has been removed in next-net; it is irrelevant to the patch.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH] net/ice: cleanup for vec path check
  2020-01-16  7:09           ` [dpdk-dev] [PATCH] net/ice: cleanup for vec path check Xiaolong Ye
@ 2020-01-16 10:19             ` Ferruh Yigit
  2020-01-17  2:21             ` Yang, Qiming
  1 sibling, 0 replies; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-16 10:19 UTC (permalink / raw)
  To: Xiaolong Ye, Qiming Yang, Wenzhuo Lu; +Cc: dev, chenxux.di

On 1/16/2020 7:09 AM, Xiaolong Ye wrote:
> Move the conditional compilation block to the inner check helper, so we
> can reduce the number of multiple ifdef check pairs used.
> 
> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>

<...>
	
> @@ -2794,10 +2792,8 @@ ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
>  	struct ice_adapter *ad =
>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
>  
> -#ifdef RTE_ARCH_X86
>  	if (ad->tx_vec_allowed)
>  		return ice_tx_done_cleanup_vec(q, free_cnt);
> -#endif
>  	if (ad->tx_simple_allowed)
>  		return ice_tx_done_cleanup_simple(q, free_cnt);
>  	else
> @@ -2953,7 +2949,6 @@ ice_set_rx_function(struct rte_eth_dev *dev)
>  	PMD_INIT_FUNC_TRACE();
>  	struct ice_adapter *ad =
>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> -#ifdef RTE_ARCH_X86
>  	struct ice_rx_queue *rxq;
>  	int i;
>  	bool use_avx2 = false;


The build is still failing for Arm; some defines like 'RTE_CPUFLAG_AVX2' &
'RTE_CPUFLAG_AVX512F' and functions such as 'ice_rxq_vec_setup',
'ice_recv_pkts_vec', 'ice_recv_scattered_pkts_vec' etc. are only defined for x86.

It looks like more work is required: creating dummy versions of these failing
functions and also moving 'ice_rx_vec_dev_check()' from 'ice_rxtx_vec_sse.c' to
'ice_rxtx.c', otherwise we have a chicken-and-egg problem. So it needs something
similar to what was done in i40e.

If it is too much work for rc1, we can go with the existing #ifdef for now, up to you.

^ permalink raw reply	[flat|nested] 74+ messages in thread
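
To illustrate the i40e-style approach mentioned above: non-x86 builds keep
linking because stubs are compiled in place of the vector paths. A minimal
sketch, assuming the stubs live in ice_rxtx.c; only 'ice_rx_vec_dev_check',
'ice_rxq_vec_setup' and 'ice_recv_pkts_vec' are named in this thread, so the
rest of the needed set is an assumption:

	#ifndef RTE_ARCH_X86
	/* Stubs for linkage only; the vector paths are never selected. */
	int
	ice_rx_vec_dev_check(struct rte_eth_dev *dev __rte_unused)
	{
		return -1;	/* vector Rx never eligible */
	}

	int
	ice_rxq_vec_setup(struct ice_rx_queue *rxq __rte_unused)
	{
		return -1;
	}

	uint16_t
	ice_recv_pkts_vec(void *rx_queue __rte_unused,
			  struct rte_mbuf **rx_pkts __rte_unused,
			  uint16_t nb_pkts __rte_unused)
	{
		return 0;	/* never reached; satisfies the linker */
	}
	#endif /* !RTE_ARCH_X86 */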

* Re: [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
  2020-01-13 11:07     ` Ananyev, Konstantin
  2020-01-16  8:44     ` Ferruh Yigit
@ 2020-01-16 14:47     ` Ferruh Yigit
  2020-01-16 15:23       ` Ferruh Yigit
  2 siblings, 1 reply; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-16 14:47 UTC (permalink / raw)
  To: Chenxu Di, dev; +Cc: Yang Qiming

On 1/13/2020 9:57 AM, Chenxu Di wrote:
> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
> to force free consumed buffers on Tx ring.
> 
> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
>  drivers/net/ixgbe/ixgbe_rxtx.c   | 109 +++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +-
>  3 files changed, 112 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 2c6fd0f13..75bdd391a 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>  	.tm_ops_get           = ixgbe_tm_ops_get,
> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>  };
>  
>  /*
> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.reta_query           = ixgbe_dev_rss_reta_query,
>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>  };
>  
>  /* store statistics names and its offset in stats structure */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index fa572d184..a2e85ed5b 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2306,6 +2306,115 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>  	}
>  }
>  
> +static int
> +ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
> +{
> +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
> +	uint16_t i, tx_last, tx_id;
> +	uint16_t nb_tx_free_last;
> +	uint16_t nb_tx_to_clean;
> +	uint32_t pkt_cnt;
> +
> +	/* Start free mbuf from the next of tx_tail */
> +	tx_last = txq->tx_tail;
> +	tx_id  = swr_ring[tx_last].next_id;
> +
> +	if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
> +		return 0;
> +
> +	nb_tx_to_clean = txq->nb_tx_free;
> +	nb_tx_free_last = txq->nb_tx_free;
> +	if (!free_cnt)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	/* Loop through swr_ring to count the amount of
> +	 * freeable mbufs and packets.
> +	 */
> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
> +		for (i = 0; i < nb_tx_to_clean &&
> +			pkt_cnt < free_cnt &&
> +			tx_id != tx_last; i++) {
> +			if (swr_ring[tx_id].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
> +				swr_ring[tx_id].mbuf = NULL;
> +
> +				/*
> +				 * last segment in the packet,
> +				 * increment packet count
> +				 */
> +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
> +			}
> +
> +			tx_id = swr_ring[tx_id].next_id;
> +		}
> +
> +		if (txq->tx_rs_thresh > txq->nb_tx_desc -
> +			txq->nb_tx_free || tx_id == tx_last)
> +			break;
> +
> +		if (pkt_cnt < free_cnt) {
> +			if (ixgbe_xmit_cleanup(txq))
> +				break;
> +
> +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
> +			nb_tx_free_last = txq->nb_tx_free;
> +		}
> +	}
> +
> +	return (int)pkt_cnt;
> +}
> +
> +static int
> +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
> +			uint32_t free_cnt)
> +{
> +	int i, n, cnt;
> +
> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
> +		free_cnt = txq->nb_tx_desc;
> +
> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
> +
> +	for (i = 0; i < cnt; i += n) {
> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
> +			break;
> +
> +		n = ixgbe_tx_free_bufs(txq);
> +
> +		if (n == 0)
> +			break;
> +	}
> +
> +	return i;
> +}
> +
> +static int
> +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
> +			uint32_t free_cnt __rte_unused)
> +{
> +	return -ENOTSUP;
> +}
> +
> +int
> +ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
> +{
> +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
> +	if (txq->offloads == 0 &&
> +#ifdef RTE_LIBRTE_SECURITY
> +			!(txq->using_ipsec) &&
> +#endif
> +			txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)
> +#ifdef RTE_IXGBE_INC_VECTOR
> +		if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
> +				(rte_eal_process_type() != RTE_PROC_PRIMARY ||
> +					txq->sw_ring_v != NULL))
> +			return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
> +#endif
> +		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
> +
> +	return ixgbe_tx_done_cleanup_full(txq, free_cnt);
> +}
> +

Missing curly parentheses in the 'if' blocks are causing confusion about which
return path to take.

the above code is like this:
if (txq->offloads == 0 && ...)
  if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ && ...)
    return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
  return ixgbe_tx_done_cleanup_simple(txq, free_cnt); <----- [*]
return ixgbe_tx_done_cleanup_full(txq, free_cnt);

It is not clear, and looks wrong based on the indentation, when the [*]
path above is taken.

I will add curly parentheses while merging.

Btw, why do we need "#ifdef RTE_IXGBE_INC_VECTOR" here, can't we remove it?

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH v9 3/4] net/ixgbe: cleanup Tx buffers
  2020-01-16 14:47     ` Ferruh Yigit
@ 2020-01-16 15:23       ` Ferruh Yigit
  0 siblings, 0 replies; 74+ messages in thread
From: Ferruh Yigit @ 2020-01-16 15:23 UTC (permalink / raw)
  To: Chenxu Di, dev; +Cc: Yang Qiming

On 1/16/2020 2:47 PM, Ferruh Yigit wrote:
> On 1/13/2020 9:57 AM, Chenxu Di wrote:
>> Add support to the ixgbe driver for the API rte_eth_tx_done_cleanup
>> to force free consumed buffers on Tx ring.
>>
>> Signed-off-by: Chenxu Di <chenxux.di@intel.com>
>> ---
>>  drivers/net/ixgbe/ixgbe_ethdev.c |   2 +
>>  drivers/net/ixgbe/ixgbe_rxtx.c   | 109 +++++++++++++++++++++++++++++++
>>  drivers/net/ixgbe/ixgbe_rxtx.h   |   2 +-
>>  3 files changed, 112 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
>> index 2c6fd0f13..75bdd391a 100644
>> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
>> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
>> @@ -601,6 +601,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
>>  	.udp_tunnel_port_add  = ixgbe_dev_udp_tunnel_port_add,
>>  	.udp_tunnel_port_del  = ixgbe_dev_udp_tunnel_port_del,
>>  	.tm_ops_get           = ixgbe_tm_ops_get,
>> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>>  };
>>  
>>  /*
>> @@ -649,6 +650,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>>  	.reta_query           = ixgbe_dev_rss_reta_query,
>>  	.rss_hash_update      = ixgbe_dev_rss_hash_update,
>>  	.rss_hash_conf_get    = ixgbe_dev_rss_hash_conf_get,
>> +	.tx_done_cleanup      = ixgbe_dev_tx_done_cleanup,
>>  };
>>  
>>  /* store statistics names and its offset in stats structure */
>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>> index fa572d184..a2e85ed5b 100644
>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
>> @@ -2306,6 +2306,115 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
>>  	}
>>  }
>>  
>> +static int
>> +ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt)
>> +{
>> +	struct ixgbe_tx_entry *swr_ring = txq->sw_ring;
>> +	uint16_t i, tx_last, tx_id;
>> +	uint16_t nb_tx_free_last;
>> +	uint16_t nb_tx_to_clean;
>> +	uint32_t pkt_cnt;
>> +
>> +	/* Start free mbuf from the next of tx_tail */
>> +	tx_last = txq->tx_tail;
>> +	tx_id  = swr_ring[tx_last].next_id;
>> +
>> +	if (txq->nb_tx_free == 0 && ixgbe_xmit_cleanup(txq))
>> +		return 0;
>> +
>> +	nb_tx_to_clean = txq->nb_tx_free;
>> +	nb_tx_free_last = txq->nb_tx_free;
>> +	if (!free_cnt)
>> +		free_cnt = txq->nb_tx_desc;
>> +
>> +	/* Loop through swr_ring to count the amount of
>> +	 * freeable mbufs and packets.
>> +	 */
>> +	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
>> +		for (i = 0; i < nb_tx_to_clean &&
>> +			pkt_cnt < free_cnt &&
>> +			tx_id != tx_last; i++) {
>> +			if (swr_ring[tx_id].mbuf != NULL) {
>> +				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
>> +				swr_ring[tx_id].mbuf = NULL;
>> +
>> +				/*
>> +				 * last segment in the packet,
>> +				 * increment packet count
>> +				 */
>> +				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
>> +			}
>> +
>> +			tx_id = swr_ring[tx_id].next_id;
>> +		}
>> +
>> +		if (txq->tx_rs_thresh > txq->nb_tx_desc -
>> +			txq->nb_tx_free || tx_id == tx_last)
>> +			break;
>> +
>> +		if (pkt_cnt < free_cnt) {
>> +			if (ixgbe_xmit_cleanup(txq))
>> +				break;
>> +
>> +			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
>> +			nb_tx_free_last = txq->nb_tx_free;
>> +		}
>> +	}
>> +
>> +	return (int)pkt_cnt;
>> +}
>> +
>> +static int
>> +ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq,
>> +			uint32_t free_cnt)
>> +{
>> +	int i, n, cnt;
>> +
>> +	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
>> +		free_cnt = txq->nb_tx_desc;
>> +
>> +	cnt = free_cnt - free_cnt % txq->tx_rs_thresh;
>> +
>> +	for (i = 0; i < cnt; i += n) {
>> +		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_rs_thresh)
>> +			break;
>> +
>> +		n = ixgbe_tx_free_bufs(txq);
>> +
>> +		if (n == 0)
>> +			break;
>> +	}
>> +
>> +	return i;
>> +}
>> +
>> +static int
>> +ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused,
>> +			uint32_t free_cnt __rte_unused)
>> +{
>> +	return -ENOTSUP;
>> +}
>> +
>> +int
>> +ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
>> +{
>> +	struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
>> +	if (txq->offloads == 0 &&
>> +#ifdef RTE_LIBRTE_SECURITY
>> +			!(txq->using_ipsec) &&
>> +#endif
>> +			txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)
>> +#ifdef RTE_IXGBE_INC_VECTOR
>> +		if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
>> +				(rte_eal_process_type() != RTE_PROC_PRIMARY ||
>> +					txq->sw_ring_v != NULL))
>> +			return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
>> +#endif
>> +		return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
>> +
>> +	return ixgbe_tx_done_cleanup_full(txq, free_cnt);
>> +}
>> +
> 
> Missing curly parentheses in the 'if' blocks are causing confusion about which
> return path to take.
> 
> the above code is like this:
> if (txq->offloads == 0 && ...)
>   if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ && ...)
>     return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
>   return ixgbe_tx_done_cleanup_simple(txq, free_cnt); <----- [*]
> return ixgbe_tx_done_cleanup_full(txq, free_cnt);
> 
> It is not clear, and looks wrong based on the indentation, when the [*]
> path above is taken.
> 
> I will add curly parentheses while merging.
> 
> Btw, why do we need "#ifdef RTE_IXGBE_INC_VECTOR" here, can't we remove it?
> 

Since 'ixgbe_tx_done_cleanup_vec()' is already implemented in this file, instead
of the vector-specific files, I am removing the ifdef.

So I am making the changes in [1] and the function becomes [2]. Please validate
it in next-net.

[1]
 diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
 index a2e85ed5b..f41dc13d5 100644
 --- a/drivers/net/ixgbe/ixgbe_rxtx.c
 +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
 @@ -2403,14 +2403,15 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
  #ifdef RTE_LIBRTE_SECURITY
                         !(txq->using_ipsec) &&
  #endif
 -                       txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)
 -#ifdef RTE_IXGBE_INC_VECTOR
 +                       txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) {
                 if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
                                 (rte_eal_process_type() != RTE_PROC_PRIMARY ||
 -                                       txq->sw_ring_v != NULL))
 +                                       txq->sw_ring_v != NULL)) {
                         return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
 -#endif
 -               return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
 +               } else {
 +                       return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
 +               }
 +       }

         return ixgbe_tx_done_cleanup_full(txq, free_cnt);
  }


[2]
 int
 ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
 {
         struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue;
         if (txq->offloads == 0 &&
 #ifdef RTE_LIBRTE_SECURITY
                         !(txq->using_ipsec) &&
 #endif
                         txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST) {
                 if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
                                 (rte_eal_process_type() != RTE_PROC_PRIMARY ||
                                         txq->sw_ring_v != NULL)) {
                         return ixgbe_tx_done_cleanup_vec(txq, free_cnt);
                 } else {
                         return ixgbe_tx_done_cleanup_simple(txq, free_cnt);
                 }
         }

         return ixgbe_tx_done_cleanup_full(txq, free_cnt);
 }

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [dpdk-dev] [PATCH] net/ice: cleanup for vec path check
  2020-01-16  7:09           ` [dpdk-dev] [PATCH] net/ice: cleanup for vec path check Xiaolong Ye
  2020-01-16 10:19             ` Ferruh Yigit
@ 2020-01-17  2:21             ` Yang, Qiming
  1 sibling, 0 replies; 74+ messages in thread
From: Yang, Qiming @ 2020-01-17  2:21 UTC (permalink / raw)
  To: Ye, Xiaolong, Lu, Wenzhuo; +Cc: dev, Yigit, Ferruh, Di, ChenxuX



> -----Original Message-----
> From: Ye, Xiaolong <xiaolong.ye@intel.com>
> Sent: Thursday, January 16, 2020 3:10 PM
> To: Yang, Qiming <qiming.yang@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Di, ChenxuX
> <chenxux.di@intel.com>; Ye, Xiaolong <xiaolong.ye@intel.com>
> Subject: [PATCH] net/ice: cleanup for vec path check
> 
> Move the conditional compilation block to the inner check helper, so we can
> reduce the number of ifdef check pairs used.
> 
> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>

Acked-by: Qiming Yang <qiming.yang@intel.com>

> ---
>  drivers/net/ice/ice_rxtx.c            | 9 ---------
>  drivers/net/ice/ice_rxtx_vec_common.h | 8 ++++++++
>  2 files changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 71adba809..8feeeb828 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -2753,14 +2753,12 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq,
>  	return (int)pkt_cnt;
>  }
> 
> -#ifdef RTE_ARCH_X86
>  static int
>  ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused,
>  			uint32_t free_cnt __rte_unused)
>  {
>  	return -ENOTSUP;
>  }
> -#endif
> 
>  static int
>  ice_tx_done_cleanup_simple(struct ice_tx_queue *txq,
> @@ -2794,10 +2792,8 @@ ice_tx_done_cleanup(void *txq, uint32_t free_cnt)
>  	struct ice_adapter *ad =
>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> 
> -#ifdef RTE_ARCH_X86
>  	if (ad->tx_vec_allowed)
>  		return ice_tx_done_cleanup_vec(q, free_cnt);
> -#endif
>  	if (ad->tx_simple_allowed)
>  		return ice_tx_done_cleanup_simple(q, free_cnt);
>  	else
> @@ -2953,7 +2949,6 @@ ice_set_rx_function(struct rte_eth_dev *dev)
>  	PMD_INIT_FUNC_TRACE();
>  	struct ice_adapter *ad =
>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> -#ifdef RTE_ARCH_X86
>  	struct ice_rx_queue *rxq;
>  	int i;
>  	bool use_avx2 = false;
> @@ -2998,8 +2993,6 @@ ice_set_rx_function(struct rte_eth_dev *dev)
>  		return;
>  	}
> 
> -#endif
> -
>  	if (dev->data->scattered_rx) {
>  		/* Set the non-LRO scattered function */
>  		PMD_INIT_LOG(DEBUG,
> @@ -3131,7 +3124,6 @@ ice_set_tx_function(struct rte_eth_dev *dev)
>  {
>  	struct ice_adapter *ad =
>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> -#ifdef RTE_ARCH_X86
>  	struct ice_tx_queue *txq;
>  	int i;
>  	bool use_avx2 = false;
> @@ -3167,7 +3159,6 @@ ice_set_tx_function(struct rte_eth_dev *dev)
> 
>  		return;
>  	}
> -#endif
> 
>  	if (ad->tx_simple_allowed) {
>  		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
> diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
> index 6b57ff2ae..223aac878 100644
> --- a/drivers/net/ice/ice_rxtx_vec_common.h
> +++ b/drivers/net/ice/ice_rxtx_vec_common.h
> @@ -267,6 +267,7 @@ ice_tx_vec_queue_default(struct ice_tx_queue *txq)
>  static inline int
>  ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
>  {
> +#ifdef RTE_ARCH_X86
>  	int i;
>  	struct ice_rx_queue *rxq;
>  	struct ice_adapter *ad =
> @@ -283,11 +284,15 @@ ice_rx_vec_dev_check_default(struct rte_eth_dev *dev)
>  	}
> 
>  	return 0;
> +#else
> +	return -1;
> +#endif
>  }
> 
>  static inline int
>  ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
>  {
> +#ifdef RTE_ARCH_X86
>  	int i;
>  	struct ice_tx_queue *txq;
> 
> @@ -298,6 +303,9 @@ ice_tx_vec_dev_check_default(struct rte_eth_dev *dev)
>  	}
> 
>  	return 0;
> +#else
> +	return -1;
> +#endif
>  }
> 
>  #endif
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 74+ messages in thread

end of thread

Thread overview: 74+ messages
2019-12-03  5:51 [dpdk-dev] [PATCH 0/4] drivers/net: cleanup Tx buffers Chenxu Di
2019-12-03  5:51 ` [dpdk-dev] [PATCH 1/4] net/fm10k: " Chenxu Di
2019-12-03  5:51 ` [dpdk-dev] [PATCH 2/4] net/i40e: " Chenxu Di
2019-12-03  5:51 ` [dpdk-dev] [PATCH 3/4] net/ice: " Chenxu Di
2019-12-03  5:51 ` [dpdk-dev] [PATCH 4/4] net/ixgbe: " Chenxu Di
2019-12-20  3:02 ` [dpdk-dev] [PATCH v2 0/5] drivers/net: " Chenxu Di
2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 1/5] net/fm10k: " Chenxu Di
2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 2/5] net/i40e: " Chenxu Di
2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 3/5] net/ice: " Chenxu Di
2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 4/5] net/ixgbe: " Chenxu Di
2019-12-20  3:02   ` [dpdk-dev] [PATCH v2 5/5] net/e1000: " Chenxu Di
2019-12-20  3:15 ` [dpdk-dev] [PATCH v3 0/5] drivers/net: " Chenxu Di
2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 1/5] net/fm10k: " Chenxu Di
2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 2/5] net/i40e: " Chenxu Di
2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 3/5] net/ice: " Chenxu Di
2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 4/5] net/ixgbe: " Chenxu Di
2019-12-20  3:15   ` [dpdk-dev] [PATCH v3 5/5] net/e1000: " Chenxu Di
2019-12-24  2:39 ` [dpdk-dev] [PATCH v4 0/5] drivers/net: " Chenxu Di
2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 1/5] net/fm10k: " Chenxu Di
2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 2/5] net/i40e: " Chenxu Di
2019-12-26  8:24     ` Xing, Beilei
2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 3/5] net/ice: " Chenxu Di
2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 4/5] net/ixgbe: " Chenxu Di
2019-12-24  2:39   ` [dpdk-dev] [PATCH v4 5/5] net/e1000: " Chenxu Di
2019-12-30  9:38 ` [dpdk-dev] [PATCH v6 0/4] drivers/net: " Chenxu Di
2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 1/4] net/i40e: " Chenxu Di
2019-12-30 13:01     ` Ananyev, Konstantin
2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 2/4] net/ice: " Chenxu Di
2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 3/4] net/ixgbe: " Chenxu Di
2019-12-30 12:53     ` Ananyev, Konstantin
2020-01-03  9:01       ` Di, ChenxuX
2020-01-05 23:36         ` Ananyev, Konstantin
2020-01-06  9:03           ` Di, ChenxuX
2020-01-06 13:26             ` Ananyev, Konstantin
2020-01-07 10:46               ` Di, ChenxuX
2020-01-07 14:09                 ` Ananyev, Konstantin
2020-01-08 10:15                   ` Di, ChenxuX
2020-01-08 15:12                     ` Ananyev, Konstantin
2019-12-30  9:38   ` [dpdk-dev] [PATCH v6 4/4] net/e1000: " Chenxu Di
2020-01-09 10:38 ` [dpdk-dev] [PATCH v7 0/4] drivers/net: " Chenxu Di
2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 1/4] net/i40e: " Chenxu Di
2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 2/4] net/ice: " Chenxu Di
2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 3/4] net/ixgbe: " Chenxu Di
2020-01-09 14:01     ` Ananyev, Konstantin
2020-01-10 10:08       ` Di, ChenxuX
2020-01-10 12:46         ` Ananyev, Konstantin
2020-01-09 10:38   ` [dpdk-dev] [PATCH v7 4/4] net/e1000: " Chenxu Di
2020-01-10  9:58 ` [dpdk-dev] [PATCH v8 0/4] drivers/net: " Chenxu Di
2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 1/4] net/i40e: " Chenxu Di
2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 2/4] net/ice: " Chenxu Di
2020-01-10  9:58   ` [dpdk-dev] [PATCH v8 3/4] net/ixgbe: " Chenxu Di
2020-01-10 13:49     ` Ananyev, Konstantin
2020-01-10  9:59   ` [dpdk-dev] [PATCH v8 4/4] net/e1000: " Chenxu Di
2020-01-13  9:57 ` [dpdk-dev] [PATCH v9 0/4] drivers/net: " Chenxu Di
2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 1/4] net/i40e: " Chenxu Di
2020-01-13 11:08     ` Ananyev, Konstantin
2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 2/4] net/ice: " Chenxu Di
2020-01-14  1:55     ` Yang, Qiming
2020-01-14 12:40     ` Ferruh Yigit
2020-01-15 14:34       ` Ferruh Yigit
2020-01-16  1:40         ` Di, ChenxuX
2020-01-16  7:09           ` [dpdk-dev] [PATCH] net/ice: cleanup for vec path check Xiaolong Ye
2020-01-16 10:19             ` Ferruh Yigit
2020-01-17  2:21             ` Yang, Qiming
2020-01-16  8:43     ` [dpdk-dev] [PATCH v9 2/4] net/ice: cleanup Tx buffers Ferruh Yigit
2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 3/4] net/ixgbe: " Chenxu Di
2020-01-13 11:07     ` Ananyev, Konstantin
2020-01-16  8:44     ` Ferruh Yigit
2020-01-16 14:47     ` Ferruh Yigit
2020-01-16 15:23       ` Ferruh Yigit
2020-01-13  9:57   ` [dpdk-dev] [PATCH v9 4/4] net/e1000: " Chenxu Di
2020-01-13 11:08     ` Ananyev, Konstantin
2020-01-14  2:49     ` Ye Xiaolong
2020-01-14  2:22 ` [dpdk-dev] [PATCH 0/4] drivers/net: " Ye Xiaolong
