* [PATCH] ethdev: introduce generic dummy packet burst function
@ 2022-02-08 19:44 Ferruh Yigit
2022-02-10 7:38 ` Loftus, Ciara
` (6 more replies)
0 siblings, 7 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-08 19:44 UTC (permalink / raw)
To: Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko
Cc: dev, Ferruh Yigit, Ciara Loftus
Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
These dummy functions are very simple. Introduce a common function in
the ethdev library and update drivers to use it instead of each driver
having its own copy.
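For reference, the change reduces to the pattern below (a minimal sketch
using only names from this patch; example_stop_datapath() is a hypothetical
driver function standing in for the various stop/close/error paths updated
here):

  #include <ethdev_driver.h> /* provides rte_eth_pkt_burst_dummy after this patch */

  /* Hypothetical driver stop path: instead of a per-PMD noop burst
   * function, point both burst callbacks at the common dummy, which
   * simply returns 0.
   */
  static void
  example_stop_datapath(struct rte_eth_dev *dev)
  {
          dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
          dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
  }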
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Ciara Loftus <ciara.loftus@intel.com>
---
drivers/net/ark/ark_ethdev.c | 8 ++---
drivers/net/ark/ark_ethdev_rx.c | 9 -----
drivers/net/ark/ark_ethdev_rx.h | 2 --
drivers/net/ark/ark_ethdev_tx.c | 9 -----
drivers/net/ark/ark_ethdev_tx.h | 3 --
drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++-----
drivers/net/bnxt/bnxt.h | 4 ---
drivers/net/bnxt/bnxt_cpr.c | 4 +--
drivers/net/bnxt/bnxt_rxr.c | 14 --------
drivers/net/bnxt/bnxt_txr.c | 14 --------
drivers/net/cnxk/cnxk_ethdev.c | 14 ++------
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 25 --------------
drivers/net/enic/enic.h | 3 --
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_rxtx.c | 11 ------
drivers/net/hns3/hns3_rxtx.c | 18 +++-------
drivers/net/hns3/hns3_rxtx.h | 3 --
drivers/net/mlx4/mlx4.c | 8 ++---
drivers/net/mlx4/mlx4_mp.c | 4 +--
drivers/net/mlx4/mlx4_rxtx.c | 52 -----------------------------
drivers/net/mlx4/mlx4_rxtx.h | 4 ---
drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +--
drivers/net/mlx5/linux/mlx5_os.c | 4 +--
drivers/net/mlx5/mlx5.c | 4 +--
drivers/net/mlx5/mlx5_rx.c | 27 +--------------
drivers/net/mlx5/mlx5_rx.h | 2 --
drivers/net/mlx5/mlx5_trigger.c | 4 +--
drivers/net/mlx5/mlx5_tx.c | 25 --------------
drivers/net/mlx5/mlx5_tx.h | 2 --
drivers/net/mlx5/windows/mlx5_os.c | 4 +--
drivers/net/pfe/pfe_ethdev.c | 20 ++---------
drivers/net/qede/qede_ethdev.c | 4 +--
drivers/net/qede/qede_rxtx.c | 9 -----
drivers/net/qede/qede_rxtx.h | 3 --
lib/ethdev/ethdev_driver.h | 19 +++++++++++
38 files changed, 58 insertions(+), 301 deletions(-)
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3f023..230a1272e986 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Use dummy function until setup */
- dev->rx_pkt_burst = ð_ark_recv_pkts_noop;
- dev->tx_pkt_burst = ð_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
@@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
if (ark->start_pg)
ark_pktgen_pause(ark->pg);
- dev->rx_pkt_burst = ð_ark_recv_pkts_noop;
- dev->tx_pkt_burst = ð_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* STOP TX Side */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 98658ce621e2..37a88cbedee4 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_recv_pkts_noop(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 859fcf1e6f71..f64b3dd137b3 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
-uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void eth_ark_dev_rx_queue_release(void *rx_queue);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 676e4115d3bf..abdce6a8cc0d 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue,
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_xmit_pkts_noop(void *vtxq __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 12c71a7158a9..7134dbfeed81 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -10,9 +10,6 @@
#include <ethdev_driver.h>
-uint16_t eth_ark_xmit_pkts_noop(void *vtxq,
- struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_xmit_pkts(void *vtxq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 66b0512c8695..cb5733c5972b 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
-static uint16_t
-bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy;
- dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
}
void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 433f1c80bee8..851b3bb2be2a 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
int wait_to_complete);
-uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
-uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
extern const struct rte_flow_ops bnxt_flow_ops;
diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 9b9285b79903..99af0f9e87ee 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp)
void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
{
- eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b60c2470f39e..5a9cf48e6739 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx_pkts;
}
-/*
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_recv_pkts(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
void bnxt_free_rx_rings(struct bnxt *bp)
{
int i;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 3b8f2382f92e..7a7196a23731 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx_pkts;
}
-/*
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct bnxt *bp = dev->data->dev_private;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 53dfb5eae80e..c6a9ada05bb4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
return rc;
}
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
static void
nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
@@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
* which caused app crash since rx/tx burst is still
* on different lcores
*/
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 379daec5f4e8..5be4fef8fe68 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
}
/*changing tx burst function to avoid any more enqueues */
- dev->tx_pkt_burst = dummy_dev_tx;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* Loop while dpni_disable() attempts to drain the egress FQs
* and confirm them back to us.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 1b49f43103a7..e79a7fc2e286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -264,7 +264,6 @@ __rte_internal
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
struct rte_mbuf **bufs, uint16_t nb_pkts);
-uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 81b28e20cb47..b8844fbdf107 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
-{
- (void)queue;
- (void)bufs;
- (void)nb_pkts;
- return 0;
-}
-
#if defined(RTE_TOOLCHAIN_GCC)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d5493c98345d..163a1f037e26 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t enic_dummy_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 163be09809b1..a8d470e8ac93 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev)
RTE_PTYPE_UNKNOWN
};
- if (dev->rx_pkt_burst != enic_dummy_recv_pkts &&
+ if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy &&
dev->rx_pkt_burst != NULL) {
struct enic *enic = pmd_priv(dev);
if (enic->overlay_offload)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 97d97ea793f2..9f351de72eb4 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
/* replace Rx function with a no-op to avoid getting stale pkts */
- eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 74a90694c718..7a66d72275d9 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -31,17 +31,6 @@
#define rte_packet_prefetch(p) do {} while (0)
#endif
-/* dummy receive function to replace actual function in
- * order to do safe reconfiguration operations.
- */
-uint16_t
-enic_dummy_recv_pkts(__rte_unused void *rx_queue,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static inline uint16_t
enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts, const bool use_64b_desc)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3b72c2375a60..8dc6cfac704d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
return hns3_xmit_pkts;
}
-uint16_t
-hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- return 0;
-}
-
static void
hns3_trace_rxtx_function(struct rte_eth_dev *dev)
{
@@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev);
eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status;
eth_dev->tx_pkt_burst = hw->set_link_down ?
- hns3_dummy_rxtx_burst :
+ rte_eth_pkt_burst_dummy :
hns3_get_tx_function(eth_dev, &prep);
eth_dev->tx_pkt_prepare = prep;
eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status;
hns3_trace_rxtx_function(eth_dev);
} else {
- eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
- eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_prepare = NULL;
}
@@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts)
return hns3_tx_done_cleanup_full(q, free_cnt);
- else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst)
+ else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy)
return 0;
else
return -ENOTSUP;
@@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw)
void
hns3_stop_tx_datapath(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 094b65b7de70..a000318357ab 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev);
void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev,
eth_tx_prep_t *prep);
-uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id);
void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id,
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 3f3c4a7c7214..910b76a92c42 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
return 0;
DEBUG("%p: detaching flows from all RX queues", (void *)dev);
priv->started = 0;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
@@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
DEBUG("%p: closing device \"%s\"",
(void *)dev,
((priv->ctx != NULL) ? priv->ctx->device->name : ""));
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
index 8fcfb5490ee9..1da64910aadd 100644
--- a/drivers/net/mlx4/mlx4_mp.c
+++ b/drivers/net/mlx4/mlx4_mp.c
@@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
break;
case MLX4_MP_REQ_STOP_RXTX:
INFO("port %u stopping datapath", dev->data->port_id);
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(dev, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ed9e41fcdea9..059e432a63fc 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq->stats.ipackets += i;
return i;
}
-
-/**
- * Dummy DPDK callback for Tx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to Tx queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_txq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
-
-/**
- * Dummy DPDK callback for Rx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to Rx queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_rxq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index 83e9534cd0a7..70f3cd868058 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
uint16_t pkts_n);
uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
-uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
/* mlx4_txq.c */
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index c448a3e9eb87..e607089e0e20 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
break;
case MLX5_MP_REQ_STOP_RXTX:
DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id);
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(&priv->mp_id, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68abb..bbe05bb837e0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60a5..5571e9067787 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_action_handle_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index f388fcc31395..11ea935d72f0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
- dev->rx_pkt_burst == removed_rx_burst) {
+ dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
rte_errno = ENOTSUP;
return -rte_errno;
}
@@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
return i;
}
-/**
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to RX queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-removed_rx_burst(void *dpdk_rxq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/*
* Vectorized Rx routines are not compiled in when required vector instructions
* are not supported on a target architecture.
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index cb5d51340db7..7e417819f7e8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c9c0a4fff8..3a59237b1a7a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
dev->data->dev_started = 0;
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf2096753..8453b2701a9f 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
return 0;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-removed_tx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/**
* Update completion queue consuming index via doorbell
* and flush the completed data buffers.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 099e72935a3a..31eb0a1ce28e 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
/* mlx5_tx.c */
-uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx __rte_unused);
int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index ac0af0ff7d43..7f3532426f1f 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index edf32aa70da6..c2991ab1ccaa 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkts;
}
-static uint16_t
-pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
- __rte_unused struct rte_mbuf **tx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-static uint16_t
-pfe_dummy_recv_pkts(__rte_unused void *rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static int
pfe_eth_open(struct rte_eth_dev *dev)
{
@@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
gemac_disable(priv->EMAC_baseaddr);
gpi_disable(priv->GPI_baseaddr);
- dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
- dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return 0;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1122a297e6b..ea6b71f09355 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
bool use_tx_offload = false;
if (is_dummy) {
- dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
- dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 7088c57b501d..85784f4a82a6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return eng0_pkts + eng1_pkts;
}
-uint16_t
-qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-
/* this function does a fake walk through over completion queue
* to calculate number of BDs used by HW.
* At the end, it restores the state of completion queue.
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 11ed1d9b9c50..013a4a07c716 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t
qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
- struct rte_mbuf **pkts,
- uint16_t nb_pkts);
int qede_start_queues(struct rte_eth_dev *eth_dev);
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 8f0ac0adf0ae..075f97a4b37a 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1432,6 +1432,25 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
*dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
}
+/**
+ * @internal
+ * Dummy DPDK callback for Rx/Tx packet burst.
+ *
+ * @param queue
+ * Pointer to Rx/Tx queue
+ * @param pkts
+ * Packet array
+ * @param nb_pkts
+ * Number of packets in packet array
+ */
+static inline uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused)
+{
+ return 0;
+}
+
/**
* Allocate an unique switch domain identifier.
*
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
@ 2022-02-10 7:38 ` Loftus, Ciara
2022-02-10 8:59 ` Ferruh Yigit
2022-02-10 11:04 ` Morten Brørup
` (5 subsequent siblings)
6 siblings, 1 reply; 24+ messages in thread
From: Loftus, Ciara @ 2022-02-10 7:38 UTC (permalink / raw)
To: Yigit, Ferruh, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, Daley, John, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko
Cc: dev
> Subject: [PATCH] ethdev: introduce generic dummy packet burst function
>
> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>
> These dummy functions are very simple. Introduce a common function in
> the ethdev library and update drivers to use it instead of each driver
> having its own copy.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Ciara Loftus <ciara.loftus@intel.com>
> ---
[snip]
> diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
> index 3f3c4a7c7214..910b76a92c42 100644
> --- a/drivers/net/mlx4/mlx4.c
> +++ b/drivers/net/mlx4/mlx4.c
> @@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
> return 0;
> DEBUG("%p: detaching flows from all RX queues", (void *)dev);
> priv->started = 0;
> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_wmb();
> /* Disable datapath on secondary process. */
> mlx4_mp_req_stop_rxtx(dev);
> @@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
> DEBUG("%p: closing device \"%s\"",
> (void *)dev,
> ((priv->ctx != NULL) ? priv->ctx->device->name : ""));
> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_wmb();
> /* Disable datapath on secondary process. */
> mlx4_mp_req_stop_rxtx(dev);
> diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
> index 8fcfb5490ee9..1da64910aadd 100644
> --- a/drivers/net/mlx4/mlx4_mp.c
> +++ b/drivers/net/mlx4/mlx4_mp.c
> @@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg
> *mp_msg, const void *peer)
> break;
> case MLX4_MP_REQ_STOP_RXTX:
> INFO("port %u stopping datapath", dev->data->port_id);
> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_mb();
> mp_init_msg(dev, &mp_res, param->type);
> res->result = 0;
> diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
> index ed9e41fcdea9..059e432a63fc 100644
> --- a/drivers/net/mlx4/mlx4_rxtx.c
> +++ b/drivers/net/mlx4/mlx4_rxtx.c
> @@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf
> **pkts, uint16_t pkts_n)
> rxq->stats.ipackets += i;
> return i;
> }
> -
> -/**
> - * Dummy DPDK callback for Tx.
> - *
> - * This function is used to temporarily replace the real callback during
> - * unsafe control operations on the queue, or in case of error.
> - *
> - * @param dpdk_txq
> - * Generic pointer to Tx queue structure.
> - * @param[in] pkts
> - * Packets to transmit.
> - * @param pkts_n
> - * Number of packets in array.
> - *
> - * @return
> - * Number of packets successfully transmitted (<= pkts_n).
> - */
> -uint16_t
> -mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t
> pkts_n)
> -{
> - (void)dpdk_txq;
> - (void)pkts;
> - (void)pkts_n;
> - rte_mb();
The mlx4 and mlx5 PMDs lose a call to rte_mb() when switching over to the new dummy functions. Maybe the maintainer can comment on whether that's an issue or not? Other than that LGTM.
Ciara
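If the barrier does turn out to matter, one hypothetical option (a sketch
only, not part of this patch) is a thin driver-local wrapper that keeps the
rte_mb() and delegates to the common helper:

  #include <rte_atomic.h>
  #include <ethdev_driver.h>

  /* Hypothetical mlx-local replacement (not in this patch): preserve the
   * rte_mb() that the removed_*_burst callbacks issued, while reusing the
   * common dummy for the no-op behaviour.
   */
  static uint16_t
  mlx_burst_removed_mb(void *queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
  {
          rte_mb();
          return rte_eth_pkt_burst_dummy(queue, pkts, nb_pkts);
  }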
> - return 0;
> -}
> -
> -/**
> - * Dummy DPDK callback for Rx.
> - *
> - * This function is used to temporarily replace the real callback during
> - * unsafe control operations on the queue, or in case of error.
> - *
> - * @param dpdk_rxq
> - * Generic pointer to Rx queue structure.
> - * @param[out] pkts
> - * Array to store received packets.
> - * @param pkts_n
> - * Maximum number of packets in array.
> - *
> - * @return
> - * Number of packets successfully received (<= pkts_n).
> - */
> -uint16_t
> -mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t
> pkts_n)
> -{
> - (void)dpdk_rxq;
> - (void)pkts;
> - (void)pkts_n;
> - rte_mb();
> - return 0;
> -}
> diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
> index 83e9534cd0a7..70f3cd868058 100644
> --- a/drivers/net/mlx4/mlx4_rxtx.h
> +++ b/drivers/net/mlx4/mlx4_rxtx.h
> @@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct
> rte_mbuf **pkts,
> uint16_t pkts_n);
> uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> uint16_t pkts_n);
> -uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
> - uint16_t pkts_n);
> -uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
> - uint16_t pkts_n);
>
> /* mlx4_txq.c */
>
> diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c
> b/drivers/net/mlx5/linux/mlx5_mp_os.c
> index c448a3e9eb87..e607089e0e20 100644
> --- a/drivers/net/mlx5/linux/mlx5_mp_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
> @@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
> break;
> case MLX5_MP_REQ_STOP_RXTX:
> DRV_LOG(INFO, "port %u stopping datapath", dev->data-
> >port_id);
> - dev->rx_pkt_burst = removed_rx_burst;
> - dev->tx_pkt_burst = removed_tx_burst;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_mb();
> mp_init_msg(&priv->mp_id, &mp_res, param->type);
> res->result = 0;
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index aecdc5a68abb..bbe05bb837e0 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
> priv->mtu);
> /* Initialize burst functions to prevent crashes before link-up. */
> - eth_dev->rx_pkt_burst = removed_rx_burst;
> - eth_dev->tx_pkt_burst = removed_tx_burst;
> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> eth_dev->dev_ops = &mlx5_dev_ops;
> eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
> eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 67eda41a60a5..5571e9067787 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
> mlx5_action_handle_flush(dev);
> mlx5_flow_meter_flush(dev, NULL);
> /* Prevent crashes when queues are still in use. */
> - dev->rx_pkt_burst = removed_rx_burst;
> - dev->tx_pkt_burst = removed_tx_burst;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_wmb();
> /* Disable datapath on secondary process. */
> mlx5_mp_os_req_stop_rxtx(dev);
> diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
> index f388fcc31395..11ea935d72f0 100644
> --- a/drivers/net/mlx5/mlx5_rx.c
> +++ b/drivers/net/mlx5/mlx5_rx.c
> @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
> dev = &rte_eth_devices[rxq->port_id];
>
> if (dev->rx_pkt_burst == NULL ||
> - dev->rx_pkt_burst == removed_rx_burst) {
> + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
> rte_errno = ENOTSUP;
> return -rte_errno;
> }
> @@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct
> rte_mbuf **pkts, uint16_t pkts_n)
> return i;
> }
>
> -/**
> - * Dummy DPDK callback for RX.
> - *
> - * This function is used to temporarily replace the real callback during
> - * unsafe control operations on the queue, or in case of error.
> - *
> - * @param dpdk_rxq
> - * Generic pointer to RX queue structure.
> - * @param[out] pkts
> - * Array to store received packets.
> - * @param pkts_n
> - * Maximum number of packets in array.
> - *
> - * @return
> - * Number of packets successfully received (<= pkts_n).
> - */
> -uint16_t
> -removed_rx_burst(void *dpdk_rxq __rte_unused,
> - struct rte_mbuf **pkts __rte_unused,
> - uint16_t pkts_n __rte_unused)
> -{
> - rte_mb();
> - return 0;
> -}
> -
> /*
> * Vectorized Rx routines are not compiled in when required vector
> instructions
> * are not supported on a target architecture.
> diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
> index cb5d51340db7..7e417819f7e8 100644
> --- a/drivers/net/mlx5/mlx5_rx.h
> +++ b/drivers/net/mlx5/mlx5_rx.h
> @@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct
> mlx5_rxq_data *rxq, uint8_t vec);
> void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
> uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
> uint16_t pkts_n);
> -uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> - uint16_t pkts_n);
> int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
> uint32_t mlx5_rx_queue_count(void *rx_queue);
> void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
> diff --git a/drivers/net/mlx5/mlx5_trigger.c
> b/drivers/net/mlx5/mlx5_trigger.c
> index 74c9c0a4fff8..3a59237b1a7a 100644
> --- a/drivers/net/mlx5/mlx5_trigger.c
> +++ b/drivers/net/mlx5/mlx5_trigger.c
> @@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
>
> dev->data->dev_started = 0;
> /* Prevent crashes when queues are still in use. */
> - dev->rx_pkt_burst = removed_rx_burst;
> - dev->tx_pkt_burst = removed_tx_burst;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_wmb();
> /* Disable datapath on secondary process. */
> mlx5_mp_os_req_stop_rxtx(dev);
> diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
> index fd2cf2096753..8453b2701a9f 100644
> --- a/drivers/net/mlx5/mlx5_tx.c
> +++ b/drivers/net/mlx5/mlx5_tx.c
> @@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data
> *__rte_restrict txq,
> return 0;
> }
>
> -/**
> - * Dummy DPDK callback for TX.
> - *
> - * This function is used to temporarily replace the real callback during
> - * unsafe control operations on the queue, or in case of error.
> - *
> - * @param dpdk_txq
> - * Generic pointer to TX queue structure.
> - * @param[in] pkts
> - * Packets to transmit.
> - * @param pkts_n
> - * Number of packets in array.
> - *
> - * @return
> - * Number of packets successfully transmitted (<= pkts_n).
> - */
> -uint16_t
> -removed_tx_burst(void *dpdk_txq __rte_unused,
> - struct rte_mbuf **pkts __rte_unused,
> - uint16_t pkts_n __rte_unused)
> -{
> - rte_mb();
> - return 0;
> -}
> -
> /**
> * Update completion queue consuming index via doorbell
> * and flush the completed data buffers.
> diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
> index 099e72935a3a..31eb0a1ce28e 100644
> --- a/drivers/net/mlx5/mlx5_tx.h
> +++ b/drivers/net/mlx5/mlx5_tx.h
> @@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct
> rte_eth_dev *dev);
>
> /* mlx5_tx.c */
>
> -uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
> - uint16_t pkts_n);
> void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
> unsigned int olx __rte_unused);
> int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
> diff --git a/drivers/net/mlx5/windows/mlx5_os.c
> b/drivers/net/mlx5/windows/mlx5_os.c
> index ac0af0ff7d43..7f3532426f1f 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
> priv->mtu);
> /* Initialize burst functions to prevent crashes before link-up. */
> - eth_dev->rx_pkt_burst = removed_rx_burst;
> - eth_dev->tx_pkt_burst = removed_tx_burst;
> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> eth_dev->dev_ops = &mlx5_dev_ops;
> eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
> eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
> index edf32aa70da6..c2991ab1ccaa 100644
> --- a/drivers/net/pfe/pfe_ethdev.c
> +++ b/drivers/net/pfe/pfe_ethdev.c
> @@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts, uint16_t nb_pkts)
> return nb_pkts;
> }
>
> -static uint16_t
> -pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
> - __rte_unused struct rte_mbuf **tx_pkts,
> - __rte_unused uint16_t nb_pkts)
> -{
> - return 0;
> -}
> -
> -static uint16_t
> -pfe_dummy_recv_pkts(__rte_unused void *rxq,
> - __rte_unused struct rte_mbuf **rx_pkts,
> - __rte_unused uint16_t nb_pkts)
> -{
> - return 0;
> -}
> -
> static int
> pfe_eth_open(struct rte_eth_dev *dev)
> {
> @@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int
> wake*/)
> gemac_disable(priv->EMAC_baseaddr);
> gpi_disable(priv->GPI_baseaddr);
>
> - dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
> - dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>
> return 0;
> }
> diff --git a/drivers/net/qede/qede_ethdev.c
> b/drivers/net/qede/qede_ethdev.c
> index a1122a297e6b..ea6b71f09355 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev,
> bool is_dummy)
> bool use_tx_offload = false;
>
> if (is_dummy) {
> - dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
> - dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> return;
> }
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 7088c57b501d..85784f4a82a6 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct
> rte_mbuf **tx_pkts, uint16_t nb_pkts)
> return eng0_pkts + eng1_pkts;
> }
>
> -uint16_t
> -qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
> - __rte_unused struct rte_mbuf **pkts,
> - __rte_unused uint16_t nb_pkts)
> -{
> - return 0;
> -}
> -
> -
> /* this function does a fake walk through over completion queue
> * to calculate number of BDs used by HW.
> * At the end, it restores the state of completion queue.
> diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
> index 11ed1d9b9c50..013a4a07c716 100644
> --- a/drivers/net/qede/qede_rxtx.h
> +++ b/drivers/net/qede/qede_rxtx.h
> @@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct
> rte_mbuf **rx_pkts,
> uint16_t
> qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
> uint16_t nb_pkts);
> -uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
> - struct rte_mbuf **pkts,
> - uint16_t nb_pkts);
>
> int qede_start_queues(struct rte_eth_dev *eth_dev);
>
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 8f0ac0adf0ae..075f97a4b37a 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1432,6 +1432,25 @@ rte_eth_linkstatus_get(const struct rte_eth_dev
> *dev,
> *dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
> }
>
> +/**
> + * @internal
> + * Dummy DPDK callback for Rx/Tx packet burst.
> + *
> + * @param queue
> + * Pointer to Rx/Tx queue
> + * @param pkts
> + * Packet array
> + * @param nb_pkts
> + * Number of packets in packet array
> + */
> +static inline uint16_t
> +rte_eth_pkt_burst_dummy(void *queue __rte_unused,
> + struct rte_mbuf **pkts __rte_unused,
> + uint16_t nb_pkts __rte_unused)
> +{
> + return 0;
> +}
> +
> /**
> * Allocate an unique switch domain identifier.
> *
> --
> 2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 7:38 ` Loftus, Ciara
@ 2022-02-10 8:59 ` Ferruh Yigit
0 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-10 8:59 UTC (permalink / raw)
To: Loftus, Ciara, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, Daley, John, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko
Cc: dev
On 2/10/2022 7:38 AM, Loftus, Ciara wrote:
>> Subject: [PATCH] ethdev: introduce generic dummy packet burst function
>>
>> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>>
>> These dummy functions are very simple. Introduce a common function in
>> the ethdev library and update drivers to use it instead of each driver
>> having its own copy.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> Cc: Ciara Loftus <ciara.loftus@intel.com>
>> ---
>
> [snip]
>
>> diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
>> index 3f3c4a7c7214..910b76a92c42 100644
>> --- a/drivers/net/mlx4/mlx4.c
>> +++ b/drivers/net/mlx4/mlx4.c
>> @@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
>> return 0;
>> DEBUG("%p: detaching flows from all RX queues", (void *)dev);
>> priv->started = 0;
>> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
>> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_wmb();
>> /* Disable datapath on secondary process. */
>> mlx4_mp_req_stop_rxtx(dev);
>> @@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
>> DEBUG("%p: closing device \"%s\"",
>> (void *)dev,
>> ((priv->ctx != NULL) ? priv->ctx->device->name : ""));
>> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
>> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_wmb();
>> /* Disable datapath on secondary process. */
>> mlx4_mp_req_stop_rxtx(dev);
>> diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
>> index 8fcfb5490ee9..1da64910aadd 100644
>> --- a/drivers/net/mlx4/mlx4_mp.c
>> +++ b/drivers/net/mlx4/mlx4_mp.c
>> @@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg
>> *mp_msg, const void *peer)
>> break;
>> case MLX4_MP_REQ_STOP_RXTX:
>> INFO("port %u stopping datapath", dev->data->port_id);
>> - dev->tx_pkt_burst = mlx4_tx_burst_removed;
>> - dev->rx_pkt_burst = mlx4_rx_burst_removed;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_mb();
>> mp_init_msg(dev, &mp_res, param->type);
>> res->result = 0;
>> diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
>> index ed9e41fcdea9..059e432a63fc 100644
>> --- a/drivers/net/mlx4/mlx4_rxtx.c
>> +++ b/drivers/net/mlx4/mlx4_rxtx.c
>> @@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf
>> **pkts, uint16_t pkts_n)
>> rxq->stats.ipackets += i;
>> return i;
>> }
>> -
>> -/**
>> - * Dummy DPDK callback for Tx.
>> - *
>> - * This function is used to temporarily replace the real callback during
>> - * unsafe control operations on the queue, or in case of error.
>> - *
>> - * @param dpdk_txq
>> - * Generic pointer to Tx queue structure.
>> - * @param[in] pkts
>> - * Packets to transmit.
>> - * @param pkts_n
>> - * Number of packets in array.
>> - *
>> - * @return
>> - * Number of packets successfully transmitted (<= pkts_n).
>> - */
>> -uint16_t
>> -mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t
>> pkts_n)
>> -{
>> - (void)dpdk_txq;
>> - (void)pkts;
>> - (void)pkts_n;
>> - rte_mb();
>
> The mlx4 and mlx5 PMDs lose a call to rte_mb() when switching over to the new dummy functions. Maybe the maintainer can comment on whether that's an issue or not? Other than that LGTM.
>
I wasn't sure either why the dummy Rx/Tx needs a memory barrier.
Matan, Slava, can you please comment?
> Ciara
>
>> - return 0;
>> -}
>> -
>> -/**
>> - * Dummy DPDK callback for Rx.
>> - *
>> - * This function is used to temporarily replace the real callback during
>> - * unsafe control operations on the queue, or in case of error.
>> - *
>> - * @param dpdk_rxq
>> - * Generic pointer to Rx queue structure.
>> - * @param[out] pkts
>> - * Array to store received packets.
>> - * @param pkts_n
>> - * Maximum number of packets in array.
>> - *
>> - * @return
>> - * Number of packets successfully received (<= pkts_n).
>> - */
>> -uint16_t
>> -mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t
>> pkts_n)
>> -{
>> - (void)dpdk_rxq;
>> - (void)pkts;
>> - (void)pkts_n;
>> - rte_mb();
>> - return 0;
>> -}
>> diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
>> index 83e9534cd0a7..70f3cd868058 100644
>> --- a/drivers/net/mlx4/mlx4_rxtx.h
>> +++ b/drivers/net/mlx4/mlx4_rxtx.h
>> @@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct
>> rte_mbuf **pkts,
>> uint16_t pkts_n);
>> uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
>> uint16_t pkts_n);
>> -uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
>> - uint16_t pkts_n);
>> -uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
>> - uint16_t pkts_n);
>>
>> /* mlx4_txq.c */
>>
>> diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c
>> b/drivers/net/mlx5/linux/mlx5_mp_os.c
>> index c448a3e9eb87..e607089e0e20 100644
>> --- a/drivers/net/mlx5/linux/mlx5_mp_os.c
>> +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
>> @@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
>> break;
>> case MLX5_MP_REQ_STOP_RXTX:
>> DRV_LOG(INFO, "port %u stopping datapath", dev->data-
>>> port_id);
>> - dev->rx_pkt_burst = removed_rx_burst;
>> - dev->tx_pkt_burst = removed_tx_burst;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_mb();
>> mp_init_msg(&priv->mp_id, &mp_res, param->type);
>> res->result = 0;
>> diff --git a/drivers/net/mlx5/linux/mlx5_os.c
>> b/drivers/net/mlx5/linux/mlx5_os.c
>> index aecdc5a68abb..bbe05bb837e0 100644
>> --- a/drivers/net/mlx5/linux/mlx5_os.c
>> +++ b/drivers/net/mlx5/linux/mlx5_os.c
>> @@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>> DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
>> priv->mtu);
>> /* Initialize burst functions to prevent crashes before link-up. */
>> - eth_dev->rx_pkt_burst = removed_rx_burst;
>> - eth_dev->tx_pkt_burst = removed_tx_burst;
>> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> eth_dev->dev_ops = &mlx5_dev_ops;
>> eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
>> eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
>> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
>> index 67eda41a60a5..5571e9067787 100644
>> --- a/drivers/net/mlx5/mlx5.c
>> +++ b/drivers/net/mlx5/mlx5.c
>> @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
>> mlx5_action_handle_flush(dev);
>> mlx5_flow_meter_flush(dev, NULL);
>> /* Prevent crashes when queues are still in use. */
>> - dev->rx_pkt_burst = removed_rx_burst;
>> - dev->tx_pkt_burst = removed_tx_burst;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_wmb();
>> /* Disable datapath on secondary process. */
>> mlx5_mp_os_req_stop_rxtx(dev);
>> diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
>> index f388fcc31395..11ea935d72f0 100644
>> --- a/drivers/net/mlx5/mlx5_rx.c
>> +++ b/drivers/net/mlx5/mlx5_rx.c
>> @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
>> dev = &rte_eth_devices[rxq->port_id];
>>
>> if (dev->rx_pkt_burst == NULL ||
>> - dev->rx_pkt_burst == removed_rx_burst) {
>> + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
>> rte_errno = ENOTSUP;
>> return -rte_errno;
>> }
>> @@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct
>> rte_mbuf **pkts, uint16_t pkts_n)
>> return i;
>> }
>>
>> -/**
>> - * Dummy DPDK callback for RX.
>> - *
>> - * This function is used to temporarily replace the real callback during
>> - * unsafe control operations on the queue, or in case of error.
>> - *
>> - * @param dpdk_rxq
>> - * Generic pointer to RX queue structure.
>> - * @param[out] pkts
>> - * Array to store received packets.
>> - * @param pkts_n
>> - * Maximum number of packets in array.
>> - *
>> - * @return
>> - * Number of packets successfully received (<= pkts_n).
>> - */
>> -uint16_t
>> -removed_rx_burst(void *dpdk_rxq __rte_unused,
>> - struct rte_mbuf **pkts __rte_unused,
>> - uint16_t pkts_n __rte_unused)
>> -{
>> - rte_mb();
>> - return 0;
>> -}
>> -
>> /*
>> * Vectorized Rx routines are not compiled in when required vector
>> instructions
>> * are not supported on a target architecture.
>> diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
>> index cb5d51340db7..7e417819f7e8 100644
>> --- a/drivers/net/mlx5/mlx5_rx.h
>> +++ b/drivers/net/mlx5/mlx5_rx.h
>> @@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct
>> mlx5_rxq_data *rxq, uint8_t vec);
>> void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
>> uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
>> uint16_t pkts_n);
>> -uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
>> - uint16_t pkts_n);
>> int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
>> uint32_t mlx5_rx_queue_count(void *rx_queue);
>> void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
>> diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
>> index 74c9c0a4fff8..3a59237b1a7a 100644
>> --- a/drivers/net/mlx5/mlx5_trigger.c
>> +++ b/drivers/net/mlx5/mlx5_trigger.c
>> @@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
>>
>> dev->data->dev_started = 0;
>> /* Prevent crashes when queues are still in use. */
>> - dev->rx_pkt_burst = removed_rx_burst;
>> - dev->tx_pkt_burst = removed_tx_burst;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> rte_wmb();
>> /* Disable datapath on secondary process. */
>> mlx5_mp_os_req_stop_rxtx(dev);
>> diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
>> index fd2cf2096753..8453b2701a9f 100644
>> --- a/drivers/net/mlx5/mlx5_tx.c
>> +++ b/drivers/net/mlx5/mlx5_tx.c
>> @@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
>> return 0;
>> }
>>
>> -/**
>> - * Dummy DPDK callback for TX.
>> - *
>> - * This function is used to temporarily replace the real callback during
>> - * unsafe control operations on the queue, or in case of error.
>> - *
>> - * @param dpdk_txq
>> - * Generic pointer to TX queue structure.
>> - * @param[in] pkts
>> - * Packets to transmit.
>> - * @param pkts_n
>> - * Number of packets in array.
>> - *
>> - * @return
>> - * Number of packets successfully transmitted (<= pkts_n).
>> - */
>> -uint16_t
>> -removed_tx_burst(void *dpdk_txq __rte_unused,
>> - struct rte_mbuf **pkts __rte_unused,
>> - uint16_t pkts_n __rte_unused)
>> -{
>> - rte_mb();
>> - return 0;
>> -}
>> -
>> /**
>> * Update completion queue consuming index via doorbell
>> * and flush the completed data buffers.
>> diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
>> index 099e72935a3a..31eb0a1ce28e 100644
>> --- a/drivers/net/mlx5/mlx5_tx.h
>> +++ b/drivers/net/mlx5/mlx5_tx.h
>> @@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
>>
>> /* mlx5_tx.c */
>>
>> -uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
>> - uint16_t pkts_n);
>> void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
>> unsigned int olx __rte_unused);
>> int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
>> diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
>> index ac0af0ff7d43..7f3532426f1f 100644
>> --- a/drivers/net/mlx5/windows/mlx5_os.c
>> +++ b/drivers/net/mlx5/windows/mlx5_os.c
>> @@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>> DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
>> priv->mtu);
>> /* Initialize burst functions to prevent crashes before link-up. */
>> - eth_dev->rx_pkt_burst = removed_rx_burst;
>> - eth_dev->tx_pkt_burst = removed_tx_burst;
>> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> eth_dev->dev_ops = &mlx5_dev_ops;
>> eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
>> eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
>> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
>> index edf32aa70da6..c2991ab1ccaa 100644
>> --- a/drivers/net/pfe/pfe_ethdev.c
>> +++ b/drivers/net/pfe/pfe_ethdev.c
>> @@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>> return nb_pkts;
>> }
>>
>> -static uint16_t
>> -pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
>> - __rte_unused struct rte_mbuf **tx_pkts,
>> - __rte_unused uint16_t nb_pkts)
>> -{
>> - return 0;
>> -}
>> -
>> -static uint16_t
>> -pfe_dummy_recv_pkts(__rte_unused void *rxq,
>> - __rte_unused struct rte_mbuf **rx_pkts,
>> - __rte_unused uint16_t nb_pkts)
>> -{
>> - return 0;
>> -}
>> -
>> static int
>> pfe_eth_open(struct rte_eth_dev *dev)
>> {
>> @@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
>> gemac_disable(priv->EMAC_baseaddr);
>> gpi_disable(priv->GPI_baseaddr);
>>
>> - dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
>> - dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>>
>> return 0;
>> }
>> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
>> index a1122a297e6b..ea6b71f09355 100644
>> --- a/drivers/net/qede/qede_ethdev.c
>> +++ b/drivers/net/qede/qede_ethdev.c
>> @@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
>> bool use_tx_offload = false;
>>
>> if (is_dummy) {
>> - dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
>> - dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
>> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
>> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
>> return;
>> }
>>
>> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
>> index 7088c57b501d..85784f4a82a6 100644
>> --- a/drivers/net/qede/qede_rxtx.c
>> +++ b/drivers/net/qede/qede_rxtx.c
>> @@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>> return eng0_pkts + eng1_pkts;
>> }
>>
>> -uint16_t
>> -qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
>> - __rte_unused struct rte_mbuf **pkts,
>> - __rte_unused uint16_t nb_pkts)
>> -{
>> - return 0;
>> -}
>> -
>> -
>> /* this function does a fake walk through over completion queue
>> * to calculate number of BDs used by HW.
>> * At the end, it restores the state of completion queue.
>> diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
>> index 11ed1d9b9c50..013a4a07c716 100644
>> --- a/drivers/net/qede/qede_rxtx.h
>> +++ b/drivers/net/qede/qede_rxtx.h
>> @@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
>> uint16_t
>> qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
>> uint16_t nb_pkts);
>> -uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
>> - struct rte_mbuf **pkts,
>> - uint16_t nb_pkts);
>>
>> int qede_start_queues(struct rte_eth_dev *eth_dev);
>>
>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>> index 8f0ac0adf0ae..075f97a4b37a 100644
>> --- a/lib/ethdev/ethdev_driver.h
>> +++ b/lib/ethdev/ethdev_driver.h
>> @@ -1432,6 +1432,25 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
>> *dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
>> }
>>
>> +/**
>> + * @internal
>> + * Dummy DPDK callback for Rx/Tx packet burst.
>> + *
>> + * @param queue
>> + * Pointer to Rx/Tx queue
>> + * @param pkts
>> + * Packet array
>> + * @param nb_pkts
>> + * Number of packets in packet array
>> + */
>> +static inline uint16_t
>> +rte_eth_pkt_burst_dummy(void *queue __rte_unused,
>> + struct rte_mbuf **pkts __rte_unused,
>> + uint16_t nb_pkts __rte_unused)
>> +{
>> + return 0;
>> +}
>> +
>> /**
>> * Allocate an unique switch domain identifier.
>> *
>> --
>> 2.34.1
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
2022-02-10 7:38 ` Loftus, Ciara
@ 2022-02-10 11:04 ` Morten Brørup
2022-02-10 11:39 ` Andrew Rybchenko
2022-02-10 13:58 ` Ferruh Yigit
` (4 subsequent siblings)
6 siblings, 1 reply; 24+ messages in thread
From: Morten Brørup @ 2022-02-10 11:04 UTC (permalink / raw)
To: Ferruh Yigit, Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko
Cc: dev, Ciara Loftus
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Tuesday, 8 February 2022 20.45
>
> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>
> These dummy functions are very simple, introduce a common function in
> the ethdev and update drivers to use it instead of each driver having
> its own functions.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
After briefly considering if the dummy TX should free the burst, I concluded that the current behavior is correct.
Good clean-up. :-)
Acked-by: Morten Brørup <mb@smartsharesystems.com>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 11:04 ` Morten Brørup
@ 2022-02-10 11:39 ` Andrew Rybchenko
2022-02-10 11:47 ` Morten Brørup
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-02-10 11:39 UTC (permalink / raw)
To: Morten Brørup, Ferruh Yigit, Shepard Siegel, Ed Czeck,
John Miller, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Hemant Agrawal, Sachin Saxena,
John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon
Cc: dev, Ciara Loftus
On 2/10/22 14:04, Morten Brørup wrote:
>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>> Sent: Tuesday, 8 February 2022 20.45
>>
>> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>>
>> These dummy functions are very simple, introduce a common function in
>> the ethdev and update drivers to use it instead of each driver having
>> its own functions.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> After briefly considering if the dummy TX should free the burst, I concluded that the current behavior is correct.
Could you share your thoughts, please. I'm wondering as well.
>
> Good clean-up. :-)
>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 11:39 ` Andrew Rybchenko
@ 2022-02-10 11:47 ` Morten Brørup
2022-02-10 11:51 ` Andrew Rybchenko
0 siblings, 1 reply; 24+ messages in thread
From: Morten Brørup @ 2022-02-10 11:47 UTC (permalink / raw)
To: Andrew Rybchenko, Ferruh Yigit, Shepard Siegel, Ed Czeck,
John Miller, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Hemant Agrawal, Sachin Saxena,
John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon
Cc: dev, Ciara Loftus
> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> Sent: Thursday, 10 February 2022 12.39
>
> On 2/10/22 14:04, Morten Brørup wrote:
> >> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> >> Sent: Tuesday, 8 February 2022 20.45
> >>
> >> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
> >>
> >> These dummy functions are very simple, introduce a common function
> in
> >> the ethdev and update drivers to use it instead of each driver
> having
> >> its own functions.
> >>
> >> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >
> > After briefly considering if the dummy TX should free the burst, I
> concluded that the current behavior is correct.
>
> Could you share your thoughts, please. I'm wondering as well.
Returning 0 means that the packets were not transmitted.
This leaves it up to the application to decide what to do: drop or retransmit.
If the dummy TX function frees the burst, it would effectively mean that the driver dropped the packets. (In that case, some drop counters should probably also be updated in the driver; but that is irrelevant now.)
Not dropping the packets could be significant during startup.
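For illustration only, a minimal application-side sketch (not part of the patch, names made up) of what returning 0 implies - the caller keeps ownership of the mbufs and chooses the policy itself:
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>
	/* Sketch: drop and free whatever the (dummy) burst did not accept;
	 * a real application might instead retry the unsent tail later. */
	static void
	app_tx_or_drop(uint16_t port_id, uint16_t queue_id,
		       struct rte_mbuf **pkts, uint16_t nb_pkts)
	{
		uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
		while (sent < nb_pkts)
			rte_pktmbuf_free(pkts[sent++]);
	}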
>
> >
> > Good clean-up. :-)
> >
> > Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 11:47 ` Morten Brørup
@ 2022-02-10 11:51 ` Andrew Rybchenko
2022-02-10 14:52 ` Slava Ovsiienko
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-02-10 11:51 UTC (permalink / raw)
To: Morten Brørup, Ferruh Yigit, Shepard Siegel, Ed Czeck,
John Miller, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Hemant Agrawal, Sachin Saxena,
John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon
Cc: dev, Ciara Loftus
On 2/10/22 14:47, Morten Brørup wrote:
>> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
>> Sent: Thursday, 10 February 2022 12.39
>>
>> On 2/10/22 14:04, Morten Brørup wrote:
>>>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>>>> Sent: Tuesday, 8 February 2022 20.45
>>>>
>>>> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>>>>
>>>> These dummy functions are very simple, introduce a common function
>> in
>>>> the ethdev and update drivers to use it instead of each driver
>> having
>>>> its own functions.
>>>>
>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>
>>> After briefly considering if the dummy TX should free the burst, I
>> concluded that the current behavior is correct.
>>
>> Could you share your thoughts, please. I'm wondering as well.
>
> Returning 0 means that the packets were not transmitted.
>
> This leaves it up to the application to decide what to do: drop or retransmit.
>
> If the dummy TX function frees the burst, it would effectively mean that the driver dropped the packets. (In that case, some drop counters should probably also be updated in the driver; but that is irrelevant now.)
Makes sense, thank you.
>
> Not dropping the packets could be significant during startup.
>
>>
>>>
>>> Good clean-up. :-)
>>>
>>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>>
>>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
2022-02-10 7:38 ` Loftus, Ciara
2022-02-10 11:04 ` Morten Brørup
@ 2022-02-10 13:58 ` Ferruh Yigit
2022-02-10 16:30 ` Stephen Hemminger
2022-02-11 9:49 ` [PATCH v2] " Ferruh Yigit
` (3 subsequent siblings)
6 siblings, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-10 13:58 UTC (permalink / raw)
To: Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko
Cc: dev, Ciara Loftus
On 2/8/2022 7:44 PM, Ferruh Yigit wrote:
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> index aecdc5a68abb..bbe05bb837e0 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
> priv->mtu);
> /* Initialize burst functions to prevent crashes before link-up. */
> - eth_dev->rx_pkt_burst = removed_rx_burst;
> - eth_dev->tx_pkt_burst = removed_tx_burst;
> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> eth_dev->dev_ops = &mlx5_dev_ops;
> eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
> eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 67eda41a60a5..5571e9067787 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
> mlx5_action_handle_flush(dev);
> mlx5_flow_meter_flush(dev, NULL);
> /* Prevent crashes when queues are still in use. */
> - dev->rx_pkt_burst = removed_rx_burst;
> - dev->tx_pkt_burst = removed_tx_burst;
> + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> rte_wmb();
> /* Disable datapath on secondary process. */
> mlx5_mp_os_req_stop_rxtx(dev);
> diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
> index f388fcc31395..11ea935d72f0 100644
> --- a/drivers/net/mlx5/mlx5_rx.c
> +++ b/drivers/net/mlx5/mlx5_rx.c
> @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
> dev = &rte_eth_devices[rxq->port_id];
>
> if (dev->rx_pkt_burst == NULL ||
> - dev->rx_pkt_burst == removed_rx_burst) {
> + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
> rte_errno = ENOTSUP;
> return -rte_errno;
> }
Thinking twice, I am not sure the above change works.
Since the function is in the header file, and the .c file that assigns
'dev->rx_pkt_burst' and the .c file that checks the function pointer
are different, these two instances of the same function may have
different addresses, and the above check may fail when it should match.
I guess the solution is to move the function to a .c file and export it
internally.
I was thinking of adding an ethdev_driver.c file; perhaps this can be
the motivation to start that file.
Thomas, Andrew, what do you think about an ethdev_driver.c file?
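A reduced, hypothetical example of the concern (not the actual ethdev code, file and function names made up; ethdev includes omitted for brevity):
	/* burst.h */
	static inline uint16_t
	dummy_burst(void *q __rte_unused, struct rte_mbuf **p __rte_unused,
		    uint16_t n __rte_unused)
	{
		return 0;
	}
	/* a.c - installs the callback */
	#include "burst.h"
	void install(struct rte_eth_dev *dev)
	{
		dev->rx_pkt_burst = dummy_burst;	/* address of a.c's copy */
	}
	/* b.c - checks the callback */
	#include "burst.h"
	int installed_dummy(struct rte_eth_dev *dev)
	{
		/* compares against b.c's own copy of the static inline
		 * function; the two addresses are not guaranteed to be equal */
		return dev->rx_pkt_burst == dummy_burst;
	}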
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 11:51 ` Andrew Rybchenko
@ 2022-02-10 14:52 ` Slava Ovsiienko
0 siblings, 0 replies; 24+ messages in thread
From: Slava Ovsiienko @ 2022-02-10 14:52 UTC (permalink / raw)
To: Andrew Rybchenko, Morten Brørup, Ferruh Yigit,
Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Gagandeep Singh,
Devendra Singh Rawat, NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: dev, Ciara Loftus
Hi,
For the mlx4/mlx5 tx_burst we build hardware descriptors, push them to the hardware and write
the doorbell register. The doorbell register can be mapped by mlx4/mlx5 into user space either
with a non-cached or with a write-combining (just a regular attribute) memory attribute.
The write-combining mapping requires an explicit memory barrier to push the written
data to the destination hardware; this takes noticeable time, so the PMDs try to optimize it out.
mlx4 simply does not perform a wmb after ringing the doorbell; it is assumed this will happen
on the next call.
For mlx5 we have a very special Tx doorbell mode (explicitly controlled via the "tx_db_nc" devarg)
to skip the last wmb in the tx_burst routine. A user requesting this should understand the risks
and take countermeasures in the application if they care about packet drops on queue/device stop.
In the worst case, if the wmb is postponed (or even never happens at all) it just causes
increased send latency for the packets in the last burst. If a queue stop happens during
this "non-promoted doorbell" period, the last burst's packets might be dropped (and we suppose
this is not crucial for a service being terminated).
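As a generic sketch of the trade-off (not the actual mlx4/mlx5 code, just an illustration):
	#include <stdbool.h>
	#include <stdint.h>
	#include <rte_atomic.h>
	/* Doorbell store to a write-combining mapping: without a store
	 * barrier the value may linger in the CPU write-combining buffer. */
	static inline void
	ring_doorbell(volatile uint64_t *db_reg, uint64_t db_val, bool flush)
	{
		*db_reg = db_val;
		if (flush)
			rte_wmb();	/* push to the device now, at a latency cost */
		/* otherwise rely on a later barrier, e.g. in the next burst call */
	}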
To summarize:
- mlx4 is at moderate risk of dropping the final send burst's packets (we have never seen this in practice, though we did not specifically check)
- mlx5 is at minor risk of dropping the final send burst's packets (in a very special mode only, and we observed latency issues in practice), so it can be disregarded
- the barrier in the dummy routine does not fully resolve this mlx-specific issue (it is not guaranteed to be invoked on the right core)
My conclusion - I would prefer to keep the barrier in the dummy, "just in case", but I have no strong objection to removing it;
we can accept the patch being discussed.
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
With best regards,
Slava
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, February 10, 2022 13:52
> To: Morten Brørup <mb@smartsharesystems.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Shepard Siegel <shepard.siegel@atomicrules.com>;
> Ed Czeck <ed.czeck@atomicrules.com>; John Miller
> <john.miller@atomicrules.com>; Rasesh Mody <rmody@marvell.com>;
> Shahed Shaikh <shshaikh@marvell.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar K <kirankumark@marvell.com>;
> Sunil Kumar Kori <skori@marvell.com>; Satha Rao
> <skoteshwar@marvell.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Sachin Saxena <sachin.saxena@oss.nxp.com>; John Daley
> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Min Hu
> (Connor) <humin29@huawei.com>; Yisen Zhuang
> <yisen.zhuang@huawei.com>; Lijun Ou <oulijun@huawei.com>; Matan Azrad
> <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> Gagandeep Singh <g.singh@nxp.com>; Devendra Singh Rawat
> <dsinghrawat@marvell.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>
> Cc: dev@dpdk.org; Ciara Loftus <ciara.loftus@intel.com>
> Subject: Re: [PATCH] ethdev: introduce generic dummy packet burst function
>
> On 2/10/22 14:47, Morten Brørup wrote:
> >> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> >> Sent: Thursday, 10 February 2022 12.39
> >>
> >> On 2/10/22 14:04, Morten Brørup wrote:
> >>>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> >>>> Sent: Tuesday, 8 February 2022 20.45
> >>>>
> >>>> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
> >>>>
> >>>> These dummy functions are very simple, introduce a common function
> >> in
> >>>> the ethdev and update drivers to use it instead of each driver
> >> having
> >>>> its own functions.
> >>>>
> >>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>
> >>> After briefly considering if the dummy TX should free the burst, I
> >> concluded that the current behavior is correct.
> >>
> >> Could you share your thoughts, please. I'm wondering as well.
> >
> > Returning 0 means that the packets were not transmitted.
> >
> > This leaves it up to the application to decide what to do: drop or retransmit.
> >
> > If the dummy TX function frees the burst, it would effectively mean that the
> driver dropped the packets. (In that case, some drop counters should
> probably also be updated in the driver; but that is irrelevant now.)
>
> Makes sense, thank you.
>
> >
> > Not dropping the packets could be significant during startup.
> >
> >>
> >>>
> >>> Good clean-up. :-)
> >>>
> >>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >>>
> >>
> >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 13:58 ` Ferruh Yigit
@ 2022-02-10 16:30 ` Stephen Hemminger
2022-02-10 18:40 ` Thomas Monjalon
0 siblings, 1 reply; 24+ messages in thread
From: Stephen Hemminger @ 2022-02-10 16:30 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, dev, Ciara Loftus
On Thu, 10 Feb 2022 13:58:43 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> On 2/8/2022 7:44 PM, Ferruh Yigit wrote:
> > diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> > index aecdc5a68abb..bbe05bb837e0 100644
> > --- a/drivers/net/mlx5/linux/mlx5_os.c
> > +++ b/drivers/net/mlx5/linux/mlx5_os.c
> > @@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> > DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
> > priv->mtu);
> > /* Initialize burst functions to prevent crashes before link-up. */
> > - eth_dev->rx_pkt_burst = removed_rx_burst;
> > - eth_dev->tx_pkt_burst = removed_tx_burst;
> > + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> > + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> > eth_dev->dev_ops = &mlx5_dev_ops;
> > eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
> > eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
> > diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> > index 67eda41a60a5..5571e9067787 100644
> > --- a/drivers/net/mlx5/mlx5.c
> > +++ b/drivers/net/mlx5/mlx5.c
> > @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
> > mlx5_action_handle_flush(dev);
> > mlx5_flow_meter_flush(dev, NULL);
> > /* Prevent crashes when queues are still in use. */
> > - dev->rx_pkt_burst = removed_rx_burst;
> > - dev->tx_pkt_burst = removed_tx_burst;
> > + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> > + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> > rte_wmb();
> > /* Disable datapath on secondary process. */
> > mlx5_mp_os_req_stop_rxtx(dev);
> > diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
> > index f388fcc31395..11ea935d72f0 100644
> > --- a/drivers/net/mlx5/mlx5_rx.c
> > +++ b/drivers/net/mlx5/mlx5_rx.c
> > @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
> > dev = &rte_eth_devices[rxq->port_id];
> >
> > if (dev->rx_pkt_burst == NULL ||
> > - dev->rx_pkt_burst == removed_rx_burst) {
> > + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
> > rte_errno = ENOTSUP;
> > return -rte_errno;
> > }
>
> Thinking twice, I am not sure the above change works.
>
> Since the function is in the header file, and the .c file that assigns
> 'dev->rx_pkt_burst' and the .c file that checks the function pointer
> are different, these two instances of the same function may have
> different addresses, and the above check may fail when it should match.
>
> I guess the solution is to move the function to a .c file and export it
> internally.
> I was thinking of adding an ethdev_driver.c file; perhaps this can be
> the motivation to start that file.
> Thomas, Andrew, what do you think about an ethdev_driver.c file?
Right, putting it in the header file ends up with multiple copies of the same
code compiled into each driver.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] ethdev: introduce generic dummy packet burst function
2022-02-10 16:30 ` Stephen Hemminger
@ 2022-02-10 18:40 ` Thomas Monjalon
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2022-02-10 18:40 UTC (permalink / raw)
To: Ferruh Yigit, Stephen Hemminger
Cc: Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Andrew Rybchenko, dev,
Ciara Loftus
10/02/2022 17:30, Stephen Hemminger:
> On Thu, 10 Feb 2022 13:58:43 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> > On 2/8/2022 7:44 PM, Ferruh Yigit wrote:
> > > --- a/drivers/net/mlx5/mlx5.c
> > > +++ b/drivers/net/mlx5/mlx5.c
> > > @@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
> > > mlx5_action_handle_flush(dev);
> > > mlx5_flow_meter_flush(dev, NULL);
> > > /* Prevent crashes when queues are still in use. */
> > > - dev->rx_pkt_burst = removed_rx_burst;
> > > - dev->tx_pkt_burst = removed_tx_burst;
> > > + dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> > > + dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> > > rte_wmb();
> > > /* Disable datapath on secondary process. */
> > > mlx5_mp_os_req_stop_rxtx(dev);
> > > diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
> > > index f388fcc31395..11ea935d72f0 100644
> > > --- a/drivers/net/mlx5/mlx5_rx.c
> > > +++ b/drivers/net/mlx5/mlx5_rx.c
> > > @@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
> > > dev = &rte_eth_devices[rxq->port_id];
> > >
> > > if (dev->rx_pkt_burst == NULL ||
> > > - dev->rx_pkt_burst == removed_rx_burst) {
> > > + dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
> > > rte_errno = ENOTSUP;
> > > return -rte_errno;
> > > }
> >
> > Thinking twice, I am not sure the above change works.
> >
> > Since the function is in the header file, and the .c file that assigns
> > 'dev->rx_pkt_burst' and the .c file that checks the function pointer
> > are different, these two instances of the same function may have
> > different addresses, and the above check may fail when it should match.
> >
> > I guess the solution is to move the function to a .c file and export it
> > internally.
> > I was thinking of adding an ethdev_driver.c file; perhaps this can be
> > the motivation to start that file.
> > Thomas, Andrew, what do you think about an ethdev_driver.c file?
>
> Right putting it the header file ends up with multiple copies of same
> code compiled into each driver.
I'm OK with introducing such a .c file.
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
` (2 preceding siblings ...)
2022-02-10 13:58 ` Ferruh Yigit
@ 2022-02-11 9:49 ` Ferruh Yigit
2022-02-11 17:14 ` [PATCH v3 1/2] " Ferruh Yigit
` (2 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 9:49 UTC (permalink / raw)
To: Shepard Siegel, Ed Czeck, John Miller, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Hemant Agrawal,
Sachin Saxena, John Daley, Hyong Youb Kim, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, Ray Kinsella
Cc: dev, Ferruh Yigit, Morten Brørup, Ciara Loftus
Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
These dummy functions are very simple, introduce a common function in
the ethdev and update drivers to use it instead of each driver having
its own functions.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
Cc: Ciara Loftus <ciara.loftus@intel.com>
v2:
* Convert inline function to actual function in new ethdev_driver.c
file. This is because of function pointer comparisons in PMDs.
PMD interface of ethdev can be moved to 'ethdev_driver.c' later.
---
drivers/net/ark/ark_ethdev.c | 8 ++---
drivers/net/ark/ark_ethdev_rx.c | 9 -----
drivers/net/ark/ark_ethdev_rx.h | 2 --
drivers/net/ark/ark_ethdev_tx.c | 9 -----
drivers/net/ark/ark_ethdev_tx.h | 3 --
drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++-----
drivers/net/bnxt/bnxt.h | 4 ---
drivers/net/bnxt/bnxt_cpr.c | 4 +--
drivers/net/bnxt/bnxt_rxr.c | 14 --------
drivers/net/bnxt/bnxt_txr.c | 14 --------
drivers/net/cnxk/cnxk_ethdev.c | 14 ++------
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 25 --------------
drivers/net/enic/enic.h | 3 --
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_rxtx.c | 11 ------
drivers/net/hns3/hns3_rxtx.c | 18 +++-------
drivers/net/hns3/hns3_rxtx.h | 3 --
drivers/net/mlx4/mlx4.c | 8 ++---
drivers/net/mlx4/mlx4_mp.c | 4 +--
drivers/net/mlx4/mlx4_rxtx.c | 52 -----------------------------
drivers/net/mlx4/mlx4_rxtx.h | 4 ---
drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +--
drivers/net/mlx5/linux/mlx5_os.c | 4 +--
drivers/net/mlx5/mlx5.c | 4 +--
drivers/net/mlx5/mlx5_rx.c | 27 +--------------
drivers/net/mlx5/mlx5_rx.h | 2 --
drivers/net/mlx5/mlx5_trigger.c | 4 +--
drivers/net/mlx5/mlx5_tx.c | 25 --------------
drivers/net/mlx5/mlx5_tx.h | 2 --
drivers/net/mlx5/windows/mlx5_os.c | 4 +--
drivers/net/pfe/pfe_ethdev.c | 20 ++---------
drivers/net/qede/qede_ethdev.c | 4 +--
drivers/net/qede/qede_rxtx.c | 9 -----
drivers/net/qede/qede_rxtx.h | 3 --
lib/ethdev/ethdev_driver.c | 13 ++++++++
lib/ethdev/ethdev_driver.h | 17 ++++++++++
lib/ethdev/meson.build | 1 +
lib/ethdev/version.map | 1 +
41 files changed, 71 insertions(+), 301 deletions(-)
create mode 100644 lib/ethdev/ethdev_driver.c
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3f023..230a1272e986 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Use dummy function until setup */
- dev->rx_pkt_burst = ð_ark_recv_pkts_noop;
- dev->tx_pkt_burst = ð_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
@@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
if (ark->start_pg)
ark_pktgen_pause(ark->pg);
- dev->rx_pkt_burst = ð_ark_recv_pkts_noop;
- dev->tx_pkt_burst = ð_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* STOP TX Side */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 98658ce621e2..37a88cbedee4 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_recv_pkts_noop(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 859fcf1e6f71..f64b3dd137b3 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
-uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void eth_ark_dev_rx_queue_release(void *rx_queue);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 676e4115d3bf..abdce6a8cc0d 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue,
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_xmit_pkts_noop(void *vtxq __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 12c71a7158a9..7134dbfeed81 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -10,9 +10,6 @@
#include <ethdev_driver.h>
-uint16_t eth_ark_xmit_pkts_noop(void *vtxq,
- struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_xmit_pkts(void *vtxq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 66b0512c8695..cb5733c5972b 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
-static uint16_t
-bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy;
- dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
}
void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0cbb58b2cf3e..44724a9dfe91 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
int wait_to_complete);
-uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
-uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
extern const struct rte_flow_ops bnxt_flow_ops;
diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 9b9285b79903..99af0f9e87ee 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp)
void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
{
- eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b60c2470f39e..5a9cf48e6739 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx_pkts;
}
-/*
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_recv_pkts(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
void bnxt_free_rx_rings(struct bnxt *bp)
{
int i;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 3b8f2382f92e..7a7196a23731 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx_pkts;
}
-/*
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct bnxt *bp = dev->data->dev_private;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 53dfb5eae80e..c6a9ada05bb4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
return rc;
}
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
static void
nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
@@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
* which caused app crash since rx/tx burst is still
* on different lcores
*/
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 379daec5f4e8..5be4fef8fe68 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
}
/*changing tx burst function to avoid any more enqueues */
- dev->tx_pkt_burst = dummy_dev_tx;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* Loop while dpni_disable() attempts to drain the egress FQs
* and confirm them back to us.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 1b49f43103a7..e79a7fc2e286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -264,7 +264,6 @@ __rte_internal
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
struct rte_mbuf **bufs, uint16_t nb_pkts);
-uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 81b28e20cb47..b8844fbdf107 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
-{
- (void)queue;
- (void)bufs;
- (void)nb_pkts;
- return 0;
-}
-
#if defined(RTE_TOOLCHAIN_GCC)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d5493c98345d..163a1f037e26 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t enic_dummy_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 163be09809b1..a8d470e8ac93 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev)
RTE_PTYPE_UNKNOWN
};
- if (dev->rx_pkt_burst != enic_dummy_recv_pkts &&
+ if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy &&
dev->rx_pkt_burst != NULL) {
struct enic *enic = pmd_priv(dev);
if (enic->overlay_offload)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 97d97ea793f2..9f351de72eb4 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
/* replace Rx function with a no-op to avoid getting stale pkts */
- eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 74a90694c718..7a66d72275d9 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -31,17 +31,6 @@
#define rte_packet_prefetch(p) do {} while (0)
#endif
-/* dummy receive function to replace actual function in
- * order to do safe reconfiguration operations.
- */
-uint16_t
-enic_dummy_recv_pkts(__rte_unused void *rx_queue,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static inline uint16_t
enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts, const bool use_64b_desc)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3b72c2375a60..8dc6cfac704d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
return hns3_xmit_pkts;
}
-uint16_t
-hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- return 0;
-}
-
static void
hns3_trace_rxtx_function(struct rte_eth_dev *dev)
{
@@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev);
eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status;
eth_dev->tx_pkt_burst = hw->set_link_down ?
- hns3_dummy_rxtx_burst :
+ rte_eth_pkt_burst_dummy :
hns3_get_tx_function(eth_dev, &prep);
eth_dev->tx_pkt_prepare = prep;
eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status;
hns3_trace_rxtx_function(eth_dev);
} else {
- eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
- eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_prepare = NULL;
}
@@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts)
return hns3_tx_done_cleanup_full(q, free_cnt);
- else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst)
+ else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy)
return 0;
else
return -ENOTSUP;
@@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw)
void
hns3_stop_tx_datapath(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 094b65b7de70..a000318357ab 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev);
void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev,
eth_tx_prep_t *prep);
-uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id);
void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id,
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 3f3c4a7c7214..910b76a92c42 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
return 0;
DEBUG("%p: detaching flows from all RX queues", (void *)dev);
priv->started = 0;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
@@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
DEBUG("%p: closing device \"%s\"",
(void *)dev,
((priv->ctx != NULL) ? priv->ctx->device->name : ""));
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
index 8fcfb5490ee9..1da64910aadd 100644
--- a/drivers/net/mlx4/mlx4_mp.c
+++ b/drivers/net/mlx4/mlx4_mp.c
@@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
break;
case MLX4_MP_REQ_STOP_RXTX:
INFO("port %u stopping datapath", dev->data->port_id);
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(dev, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ed9e41fcdea9..059e432a63fc 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq->stats.ipackets += i;
return i;
}
-
-/**
- * Dummy DPDK callback for Tx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to Tx queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_txq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
-
-/**
- * Dummy DPDK callback for Rx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to Rx queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_rxq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index 83e9534cd0a7..70f3cd868058 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
uint16_t pkts_n);
uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
-uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
/* mlx4_txq.c */
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index c448a3e9eb87..e607089e0e20 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
break;
case MLX5_MP_REQ_STOP_RXTX:
DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id);
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(&priv->mp_id, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68abb..bbe05bb837e0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60a5..5571e9067787 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_action_handle_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index f388fcc31395..11ea935d72f0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
- dev->rx_pkt_burst == removed_rx_burst) {
+ dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
rte_errno = ENOTSUP;
return -rte_errno;
}
@@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
return i;
}
-/**
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to RX queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-removed_rx_burst(void *dpdk_rxq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/*
* Vectorized Rx routines are not compiled in when required vector instructions
* are not supported on a target architecture.
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index cb5d51340db7..7e417819f7e8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c9c0a4fff8..3a59237b1a7a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
dev->data->dev_started = 0;
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf2096753..8453b2701a9f 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
return 0;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-removed_tx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/**
* Update completion queue consuming index via doorbell
* and flush the completed data buffers.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 398cadfeaa46..c4b8271f6fb3 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
/* mlx5_tx.c */
-uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx __rte_unused);
int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index ac0af0ff7d43..7f3532426f1f 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index edf32aa70da6..c2991ab1ccaa 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkts;
}
-static uint16_t
-pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
- __rte_unused struct rte_mbuf **tx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-static uint16_t
-pfe_dummy_recv_pkts(__rte_unused void *rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static int
pfe_eth_open(struct rte_eth_dev *dev)
{
@@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
gemac_disable(priv->EMAC_baseaddr);
gpi_disable(priv->GPI_baseaddr);
- dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
- dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return 0;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1122a297e6b..ea6b71f09355 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
bool use_tx_offload = false;
if (is_dummy) {
- dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
- dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 7088c57b501d..85784f4a82a6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return eng0_pkts + eng1_pkts;
}
-uint16_t
-qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-
/* this function does a fake walk through over completion queue
* to calculate number of BDs used by HW.
* At the end, it restores the state of completion queue.
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 11ed1d9b9c50..013a4a07c716 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t
qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
- struct rte_mbuf **pkts,
- uint16_t nb_pkts);
int qede_start_queues(struct rte_eth_dev *eth_dev);
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
new file mode 100644
index 000000000000..fb7323f4d327
--- /dev/null
+++ b/lib/ethdev/ethdev_driver.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include "ethdev_driver.h"
+
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused)
+{
+ return 0;
+}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 76a3975c1bb1..c58937baad9b 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1487,6 +1487,23 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
*dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
}
+/**
+ * @internal
+ * Dummy DPDK callback for Rx/Tx packet burst.
+ *
+ * @param queue
+ * Pointer to Rx/Tx queue
+ * @param pkts
+ * Packet array
+ * @param nb_pkts
+ * Number of packets in packet array
+ */
+__rte_internal
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused);
+
/**
* Allocate an unique switch domain identifier.
*
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 0205c853df53..a094585bf715 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -2,6 +2,7 @@
# Copyright(c) 2017 Intel Corporation
sources = files(
+ 'ethdev_driver.c',
'ethdev_private.c',
'ethdev_profile.c',
'ethdev_trace_points.c',
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1ca7ec33ee45..e7dc821bb3f0 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -288,6 +288,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
rte_eth_ip_reassembly_dynfield_register;
+ rte_eth_pkt_burst_dummy;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 1/2] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
` (3 preceding siblings ...)
2022-02-11 9:49 ` [PATCH v2] " Ferruh Yigit
@ 2022-02-11 17:14 ` Ferruh Yigit
2022-02-11 17:14 ` [PATCH v3 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 18:03 ` [PATCH v3 1/2] ethdev: introduce generic dummy packet burst function Thomas Monjalon
2022-02-11 18:38 ` [PATCH v4 " Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
6 siblings, 2 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 17:14 UTC (permalink / raw)
To: Ciara Loftus, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, John Daley, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, Ray Kinsella
Cc: dev, Ferruh Yigit, Morten Brørup
Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
These dummy functions are very simple; introduce a common function in
the ethdev and update drivers to use it instead of each driver keeping
its own copy.
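As a condensed view of what this series changes in every driver, here is a
minimal sketch; the pmd_stop_datapath() helper is hypothetical, only the
assignments and the barrier mirror the real hunks below:

#include <rte_atomic.h>
#include <ethdev_driver.h>

/* rte_eth_pkt_burst_dummy() is declared in ethdev_driver.h and simply
 * returns 0, i.e. it receives/transmits nothing.
 *
 * Hypothetical driver stop path switching to the shared dummy:
 */
static void
pmd_stop_datapath(struct rte_eth_dev *dev)
{
        dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
        dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
        rte_mb(); /* make the new callbacks visible to datapath lcores */
}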
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
Cc: Ciara Loftus <ciara.loftus@intel.com>
v2:
* Convert inline function to an actual function in the new
ethdev_driver.c file. This is because PMDs compare function pointers
against the dummy burst (see the sketch after this changelog).
PMD interface of ethdev can be moved to 'ethdev_driver.c' later.
v3:
* updated af_xdp too
---
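To make the v2 note above concrete, a hedged sketch of the kind of check
that requires the dummy to be a single out-of-line symbol; the
pmd_rx_is_dummy() helper is hypothetical, the comparison itself appears in
the enic and hns3 hunks below:

#include <stdbool.h>
#include <ethdev_driver.h>

/* If rte_eth_pkt_burst_dummy() were a 'static inline' in the header, each
 * translation unit would carry its own copy with its own address, so a
 * comparison like this could differ between PMD object files. One real
 * definition in ethdev_driver.c keeps a single address process-wide.
 */
static bool
pmd_rx_is_dummy(const struct rte_eth_dev *dev)
{
        return dev->rx_pkt_burst == rte_eth_pkt_burst_dummy;
}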
drivers/net/af_xdp/rte_eth_af_xdp.c | 26 ++-------------
drivers/net/ark/ark_ethdev.c | 8 ++---
drivers/net/ark/ark_ethdev_rx.c | 9 -----
drivers/net/ark/ark_ethdev_rx.h | 2 --
drivers/net/ark/ark_ethdev_tx.c | 9 -----
drivers/net/ark/ark_ethdev_tx.h | 3 --
drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++-----
drivers/net/bnxt/bnxt.h | 4 ---
drivers/net/bnxt/bnxt_cpr.c | 4 +--
drivers/net/bnxt/bnxt_rxr.c | 14 --------
drivers/net/bnxt/bnxt_txr.c | 14 --------
drivers/net/cnxk/cnxk_ethdev.c | 14 ++------
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 25 --------------
drivers/net/enic/enic.h | 3 --
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_rxtx.c | 11 ------
drivers/net/hns3/hns3_rxtx.c | 18 +++-------
drivers/net/hns3/hns3_rxtx.h | 3 --
drivers/net/mlx4/mlx4.c | 8 ++---
drivers/net/mlx4/mlx4_mp.c | 4 +--
drivers/net/mlx4/mlx4_rxtx.c | 52 -----------------------------
drivers/net/mlx4/mlx4_rxtx.h | 4 ---
drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +--
drivers/net/mlx5/linux/mlx5_os.c | 4 +--
drivers/net/mlx5/mlx5.c | 4 +--
drivers/net/mlx5/mlx5_rx.c | 27 +--------------
drivers/net/mlx5/mlx5_rx.h | 2 --
drivers/net/mlx5/mlx5_trigger.c | 4 +--
drivers/net/mlx5/mlx5_tx.c | 25 --------------
drivers/net/mlx5/mlx5_tx.h | 2 --
drivers/net/mlx5/windows/mlx5_os.c | 4 +--
drivers/net/pfe/pfe_ethdev.c | 20 ++---------
drivers/net/qede/qede_ethdev.c | 4 +--
drivers/net/qede/qede_rxtx.c | 9 -----
drivers/net/qede/qede_rxtx.h | 3 --
lib/ethdev/ethdev_driver.c | 13 ++++++++
lib/ethdev/ethdev_driver.h | 17 ++++++++++
lib/ethdev/meson.build | 1 +
lib/ethdev/version.map | 1 +
42 files changed, 73 insertions(+), 325 deletions(-)
create mode 100644 lib/ethdev/ethdev_driver.c
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 4a37c11960e1..6ac710c6bdc6 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1916,28 +1916,6 @@ afxdp_mp_send_fds(const struct rte_mp_msg *request, const void *peer)
return 0;
}
-/* Secondary process rx function. RX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_rx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
-/* Secondary process tx function. TX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_tx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
static int
rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
{
@@ -1961,8 +1939,8 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
}
eth_dev->dev_ops = &ops;
eth_dev->device = &dev->device;
- eth_dev->rx_pkt_burst = eth_af_xdp_rx_noop;
- eth_dev->tx_pkt_burst = eth_af_xdp_tx_noop;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->process_private = (struct pmd_process_private *)
rte_zmalloc_socket(name,
sizeof(struct pmd_process_private),
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3f023..230a1272e986 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Use dummy function until setup */
- dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
- dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
@@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
if (ark->start_pg)
ark_pktgen_pause(ark->pg);
- dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
- dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* STOP TX Side */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 98658ce621e2..37a88cbedee4 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_recv_pkts_noop(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 859fcf1e6f71..f64b3dd137b3 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
-uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void eth_ark_dev_rx_queue_release(void *rx_queue);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 676e4115d3bf..abdce6a8cc0d 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue,
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_xmit_pkts_noop(void *vtxq __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 12c71a7158a9..7134dbfeed81 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -10,9 +10,6 @@
#include <ethdev_driver.h>
-uint16_t eth_ark_xmit_pkts_noop(void *vtxq,
- struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_xmit_pkts(void *vtxq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 66b0512c8695..cb5733c5972b 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
-static uint16_t
-bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy;
- dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
}
void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0cbb58b2cf3e..44724a9dfe91 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
int wait_to_complete);
-uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
-uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
extern const struct rte_flow_ops bnxt_flow_ops;
diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 9b9285b79903..99af0f9e87ee 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp)
void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
{
- eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b60c2470f39e..5a9cf48e6739 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx_pkts;
}
-/*
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_recv_pkts(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
void bnxt_free_rx_rings(struct bnxt *bp)
{
int i;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 3b8f2382f92e..7a7196a23731 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx_pkts;
}
-/*
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct bnxt *bp = dev->data->dev_private;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 53dfb5eae80e..c6a9ada05bb4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
return rc;
}
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
static void
nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
@@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
* which caused app crash since rx/tx burst is still
* on different lcores
*/
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 379daec5f4e8..5be4fef8fe68 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
}
/*changing tx burst function to avoid any more enqueues */
- dev->tx_pkt_burst = dummy_dev_tx;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* Loop while dpni_disable() attempts to drain the egress FQs
* and confirm them back to us.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 1b49f43103a7..e79a7fc2e286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -264,7 +264,6 @@ __rte_internal
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
struct rte_mbuf **bufs, uint16_t nb_pkts);
-uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 81b28e20cb47..b8844fbdf107 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
-{
- (void)queue;
- (void)bufs;
- (void)nb_pkts;
- return 0;
-}
-
#if defined(RTE_TOOLCHAIN_GCC)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d5493c98345d..163a1f037e26 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t enic_dummy_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 163be09809b1..a8d470e8ac93 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev)
RTE_PTYPE_UNKNOWN
};
- if (dev->rx_pkt_burst != enic_dummy_recv_pkts &&
+ if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy &&
dev->rx_pkt_burst != NULL) {
struct enic *enic = pmd_priv(dev);
if (enic->overlay_offload)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 97d97ea793f2..9f351de72eb4 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
/* replace Rx function with a no-op to avoid getting stale pkts */
- eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 74a90694c718..7a66d72275d9 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -31,17 +31,6 @@
#define rte_packet_prefetch(p) do {} while (0)
#endif
-/* dummy receive function to replace actual function in
- * order to do safe reconfiguration operations.
- */
-uint16_t
-enic_dummy_recv_pkts(__rte_unused void *rx_queue,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static inline uint16_t
enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts, const bool use_64b_desc)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3b72c2375a60..8dc6cfac704d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
return hns3_xmit_pkts;
}
-uint16_t
-hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- return 0;
-}
-
static void
hns3_trace_rxtx_function(struct rte_eth_dev *dev)
{
@@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev);
eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status;
eth_dev->tx_pkt_burst = hw->set_link_down ?
- hns3_dummy_rxtx_burst :
+ rte_eth_pkt_burst_dummy :
hns3_get_tx_function(eth_dev, &prep);
eth_dev->tx_pkt_prepare = prep;
eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status;
hns3_trace_rxtx_function(eth_dev);
} else {
- eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
- eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_prepare = NULL;
}
@@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts)
return hns3_tx_done_cleanup_full(q, free_cnt);
- else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst)
+ else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy)
return 0;
else
return -ENOTSUP;
@@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw)
void
hns3_stop_tx_datapath(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 094b65b7de70..a000318357ab 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev);
void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev,
eth_tx_prep_t *prep);
-uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id);
void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id,
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 3f3c4a7c7214..910b76a92c42 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
return 0;
DEBUG("%p: detaching flows from all RX queues", (void *)dev);
priv->started = 0;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
@@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
DEBUG("%p: closing device \"%s\"",
(void *)dev,
((priv->ctx != NULL) ? priv->ctx->device->name : ""));
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
index 8fcfb5490ee9..1da64910aadd 100644
--- a/drivers/net/mlx4/mlx4_mp.c
+++ b/drivers/net/mlx4/mlx4_mp.c
@@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
break;
case MLX4_MP_REQ_STOP_RXTX:
INFO("port %u stopping datapath", dev->data->port_id);
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(dev, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ed9e41fcdea9..059e432a63fc 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq->stats.ipackets += i;
return i;
}
-
-/**
- * Dummy DPDK callback for Tx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to Tx queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_txq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
-
-/**
- * Dummy DPDK callback for Rx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to Rx queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_rxq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index 83e9534cd0a7..70f3cd868058 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
uint16_t pkts_n);
uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
-uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
/* mlx4_txq.c */
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index c448a3e9eb87..e607089e0e20 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
break;
case MLX5_MP_REQ_STOP_RXTX:
DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id);
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(&priv->mp_id, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68abb..bbe05bb837e0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60a5..5571e9067787 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_action_handle_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index f388fcc31395..11ea935d72f0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
- dev->rx_pkt_burst == removed_rx_burst) {
+ dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
rte_errno = ENOTSUP;
return -rte_errno;
}
@@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
return i;
}
-/**
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to RX queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-removed_rx_burst(void *dpdk_rxq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/*
* Vectorized Rx routines are not compiled in when required vector instructions
* are not supported on a target architecture.
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index cb5d51340db7..7e417819f7e8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c9c0a4fff8..3a59237b1a7a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
dev->data->dev_started = 0;
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf2096753..8453b2701a9f 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
return 0;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-removed_tx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/**
* Update completion queue consuming index via doorbell
* and flush the completed data buffers.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 398cadfeaa46..c4b8271f6fb3 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
/* mlx5_tx.c */
-uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx __rte_unused);
int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index ac0af0ff7d43..7f3532426f1f 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index edf32aa70da6..c2991ab1ccaa 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkts;
}
-static uint16_t
-pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
- __rte_unused struct rte_mbuf **tx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-static uint16_t
-pfe_dummy_recv_pkts(__rte_unused void *rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static int
pfe_eth_open(struct rte_eth_dev *dev)
{
@@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
gemac_disable(priv->EMAC_baseaddr);
gpi_disable(priv->GPI_baseaddr);
- dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
- dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return 0;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1122a297e6b..ea6b71f09355 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
bool use_tx_offload = false;
if (is_dummy) {
- dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
- dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 7088c57b501d..85784f4a82a6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return eng0_pkts + eng1_pkts;
}
-uint16_t
-qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-
/* this function does a fake walk through over completion queue
* to calculate number of BDs used by HW.
* At the end, it restores the state of completion queue.
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 11ed1d9b9c50..013a4a07c716 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t
qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
- struct rte_mbuf **pkts,
- uint16_t nb_pkts);
int qede_start_queues(struct rte_eth_dev *eth_dev);
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
new file mode 100644
index 000000000000..fb7323f4d327
--- /dev/null
+++ b/lib/ethdev/ethdev_driver.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include "ethdev_driver.h"
+
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused)
+{
+ return 0;
+}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 617b450d5763..8de8e1c67113 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1509,6 +1509,23 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
*dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
}
+/**
+ * @internal
+ * Dummy DPDK callback for Rx/Tx packet burst.
+ *
+ * @param queue
+ * Pointer to Rx/Tx queue
+ * @param pkts
+ * Packet array
+ * @param nb_pkts
+ * Number of packets in packet array
+ */
+__rte_internal
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused);
+
/**
* Allocate an unique switch domain identifier.
*
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 0205c853df53..a094585bf715 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -2,6 +2,7 @@
# Copyright(c) 2017 Intel Corporation
sources = files(
+ 'ethdev_driver.c',
'ethdev_private.c',
'ethdev_profile.c',
'ethdev_trace_points.c',
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1a43282ce45d..d5cc56a56023 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -289,6 +289,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
rte_eth_ip_reassembly_dynfield_register;
+ rte_eth_pkt_burst_dummy;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 2/2] ethdev: move driver interface functions to its own file
2022-02-11 17:14 ` [PATCH v3 1/2] " Ferruh Yigit
@ 2022-02-11 17:14 ` Ferruh Yigit
2022-02-11 18:09 ` Thomas Monjalon
2022-02-11 18:03 ` [PATCH v3 1/2] ethdev: introduce generic dummy packet burst function Thomas Monjalon
1 sibling, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 17:14 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko, Anatoly Burakov; +Cc: dev, Ferruh Yigit
Relevant functions moved to ethdev_driver.c.
No functional change.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.c | 131 ++++++
lib/ethdev/ethdev_private.h | 36 ++
lib/ethdev/rte_ethdev.c | 901 ------------------------------------
4 files changed, 925 insertions(+), 901 deletions(-)
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index fb7323f4d327..e0ea30be5fe9 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -2,7 +2,633 @@
* Copyright(c) 2022 Intel Corporation
*/
+#include <rte_kvargs.h>
+#include <rte_malloc.h>
+
#include "ethdev_driver.h"
+#include "ethdev_private.h"
+
+/**
+ * A set of values to describe the possible states of a switch domain.
+ */
+enum rte_eth_switch_domain_state {
+ RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED
+};
+
+/**
+ * Array of switch domains available for allocation. Array is sized to
+ * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
+ * ethdev ports in a single process.
+ */
+static struct rte_eth_dev_switch {
+ enum rte_eth_switch_domain_state state;
+} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_dev *
+eth_dev_allocated(const char *name)
+{
+ uint16_t i;
+
+ RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (rte_eth_devices[i].data != NULL &&
+ strcmp(rte_eth_devices[i].data->name, name) == 0)
+ return &rte_eth_devices[i];
+ }
+ return NULL;
+}
+
+static uint16_t
+eth_dev_find_free_port(void)
+{
+ uint16_t i;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ /* Using shared name field to find a free port. */
+ if (eth_dev_shared_data->data[i].name[0] == '\0') {
+ RTE_ASSERT(rte_eth_devices[i].state ==
+ RTE_ETH_DEV_UNUSED);
+ return i;
+ }
+ }
+ return RTE_MAX_ETHPORTS;
+}
+
+static struct rte_eth_dev *
+eth_dev_get(uint16_t port_id)
+{
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
+
+ eth_dev->data = &eth_dev_shared_data->data[port_id];
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocate(const char *name)
+{
+ uint16_t port_id;
+ struct rte_eth_dev *eth_dev = NULL;
+ size_t name_len;
+
+ name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
+ if (name_len == 0) {
+ RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
+ return NULL;
+ }
+
+ if (name_len >= RTE_ETH_NAME_MAX_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
+ return NULL;
+ }
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port creation between primary and secondary threads. */
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ if (eth_dev_allocated(name) != NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethernet device with name %s already allocated\n",
+ name);
+ goto unlock;
+ }
+
+ port_id = eth_dev_find_free_port();
+ if (port_id == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Reached maximum number of Ethernet ports\n");
+ goto unlock;
+ }
+
+ eth_dev = eth_dev_get(port_id);
+ strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
+ eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
+ eth_dev->data->mtu = RTE_ETHER_MTU;
+ pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
+
+unlock:
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocated(const char *name)
+{
+ struct rte_eth_dev *ethdev;
+
+ eth_dev_shared_data_prepare();
+
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ ethdev = eth_dev_allocated(name);
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return ethdev;
+}
+
+/*
+ * Attach to a port already registered by the primary process, which
+ * makes sure that the same device would have the same port ID both
+ * in the primary and secondary process.
+ */
+struct rte_eth_dev *
+rte_eth_dev_attach_secondary(const char *name)
+{
+ uint16_t i;
+ struct rte_eth_dev *eth_dev = NULL;
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port attachment to primary port creation and release. */
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
+ break;
+ }
+ if (i == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device %s is not driven by the primary process\n",
+ name);
+ } else {
+ eth_dev = eth_dev_get(i);
+ RTE_ASSERT(eth_dev->data->port_id == i);
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+ return eth_dev;
+}
+
+int
+rte_eth_dev_callback_process(struct rte_eth_dev *dev,
+ enum rte_eth_event_type event, void *ret_param)
+{
+ struct rte_eth_dev_callback *cb_lst;
+ struct rte_eth_dev_callback dev_cb;
+ int rc = 0;
+
+ rte_spinlock_lock(&eth_dev_cb_lock);
+ TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+ if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+ continue;
+ dev_cb = *cb_lst;
+ cb_lst->active = 1;
+ if (ret_param != NULL)
+ dev_cb.ret_param = ret_param;
+
+ rte_spinlock_unlock(&eth_dev_cb_lock);
+ rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
+ dev_cb.cb_arg, dev_cb.ret_param);
+ rte_spinlock_lock(&eth_dev_cb_lock);
+ cb_lst->active = 0;
+ }
+ rte_spinlock_unlock(&eth_dev_cb_lock);
+ return rc;
+}
+
+void
+rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
+{
+ if (dev == NULL)
+ return;
+
+ /*
+ * for secondary process, at that point we expect device
+ * to be already 'usable', so shared data and all function pointers
+ * for fast-path devops have to be setup properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
+
+ dev->state = RTE_ETH_DEV_ATTACHED;
+}
+
+int
+rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
+{
+ if (eth_dev == NULL)
+ return -EINVAL;
+
+ eth_dev_shared_data_prepare();
+
+ if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_DESTROY, NULL);
+
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ eth_dev->state = RTE_ETH_DEV_UNUSED;
+ eth_dev->device = NULL;
+ eth_dev->process_private = NULL;
+ eth_dev->intr_handle = NULL;
+ eth_dev->rx_pkt_burst = NULL;
+ eth_dev->tx_pkt_burst = NULL;
+ eth_dev->tx_pkt_prepare = NULL;
+ eth_dev->rx_queue_count = NULL;
+ eth_dev->rx_descriptor_status = NULL;
+ eth_dev->tx_descriptor_status = NULL;
+ eth_dev->dev_ops = NULL;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ rte_free(eth_dev->data->rx_queues);
+ rte_free(eth_dev->data->tx_queues);
+ rte_free(eth_dev->data->mac_addrs);
+ rte_free(eth_dev->data->hash_mac_addrs);
+ rte_free(eth_dev->data->dev_private);
+ pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
+ memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return 0;
+}
+
+int
+rte_eth_dev_create(struct rte_device *device, const char *name,
+ size_t priv_data_size,
+ ethdev_bus_specific_init ethdev_bus_specific_init,
+ void *bus_init_params,
+ ethdev_init_t ethdev_init, void *init_params)
+{
+ struct rte_eth_dev *ethdev;
+ int retval;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ethdev = rte_eth_dev_allocate(name);
+ if (!ethdev)
+ return -ENODEV;
+
+ if (priv_data_size) {
+ ethdev->data->dev_private = rte_zmalloc_socket(
+ name, priv_data_size, RTE_CACHE_LINE_SIZE,
+ device->numa_node);
+
+ if (!ethdev->data->dev_private) {
+ RTE_ETHDEV_LOG(ERR,
+ "failed to allocate private data\n");
+ retval = -ENOMEM;
+ goto probe_failed;
+ }
+ }
+ } else {
+ ethdev = rte_eth_dev_attach_secondary(name);
+ if (!ethdev) {
+ RTE_ETHDEV_LOG(ERR,
+ "secondary process attach failed, ethdev doesn't exist\n");
+ return -ENODEV;
+ }
+ }
+
+ ethdev->device = device;
+
+ if (ethdev_bus_specific_init) {
+ retval = ethdev_bus_specific_init(ethdev, bus_init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR,
+ "ethdev bus specific initialisation failed\n");
+ goto probe_failed;
+ }
+ }
+
+ retval = ethdev_init(ethdev, init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
+ goto probe_failed;
+ }
+
+ rte_eth_dev_probing_finish(ethdev);
+
+ return retval;
+
+probe_failed:
+ rte_eth_dev_release_port(ethdev);
+ return retval;
+}
+
+int
+rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
+ ethdev_uninit_t ethdev_uninit)
+{
+ int ret;
+
+ ethdev = rte_eth_dev_allocated(ethdev->data->name);
+ if (!ethdev)
+ return -ENODEV;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
+
+ ret = ethdev_uninit(ethdev);
+ if (ret)
+ return ret;
+
+ return rte_eth_dev_release_port(ethdev);
+}
+
+struct rte_eth_dev *
+rte_eth_dev_get_by_name(const char *name)
+{
+ uint16_t pid;
+
+ if (rte_eth_dev_get_port_by_name(name, &pid))
+ return NULL;
+
+ return &rte_eth_devices[pid];
+}
+
+int
+rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+int
+rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+void
+rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_started) {
+ RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
+ dev->data->port_id);
+ return;
+ }
+
+ eth_dev_rx_queue_config(dev, 0);
+ eth_dev_tx_queue_config(dev, 0);
+
+ memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
+}
+
+static int
+eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
+{
+ int state;
+ struct rte_kvargs_pair *pair;
+ char *letter;
+
+ arglist->str = strdup(str_in);
+ if (arglist->str == NULL)
+ return -ENOMEM;
+
+ letter = arglist->str;
+ state = 0;
+ arglist->count = 0;
+ pair = &arglist->pairs[0];
+ while (1) {
+ switch (state) {
+ case 0: /* Initial */
+ if (*letter == '=')
+ return -EINVAL;
+ else if (*letter == '\0')
+ return 0;
+
+ state = 1;
+ pair->key = letter;
+ /* fall-thru */
+
+ case 1: /* Parsing key */
+ if (*letter == '=') {
+ *letter = '\0';
+ pair->value = letter + 1;
+ state = 2;
+ } else if (*letter == ',' || *letter == '\0')
+ return -EINVAL;
+ break;
+
+
+ case 2: /* Parsing value */
+ if (*letter == '[')
+ state = 3;
+ else if (*letter == ',') {
+ *letter = '\0';
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ } else if (*letter == '\0') {
+ letter--;
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ }
+ break;
+
+ case 3: /* Parsing list */
+ if (*letter == ']')
+ state = 2;
+ else if (*letter == '\0')
+ return -EINVAL;
+ break;
+ }
+ letter++;
+ }
+}
+
+int
+rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
+{
+ struct rte_kvargs args;
+ struct rte_kvargs_pair *pair;
+ unsigned int i;
+ int result = 0;
+
+ memset(eth_da, 0, sizeof(*eth_da));
+
+ result = eth_dev_devargs_tokenise(&args, dargs);
+ if (result < 0)
+ goto parse_cleanup;
+
+ for (i = 0; i < args.count; i++) {
+ pair = &args.pairs[i];
+ if (strcmp("representor", pair->key) == 0) {
+ if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
+ RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
+ dargs);
+ result = -1;
+ goto parse_cleanup;
+ }
+ result = rte_eth_devargs_parse_representor_ports(
+ pair->value, eth_da);
+ if (result < 0)
+ goto parse_cleanup;
+ }
+ }
+
+parse_cleanup:
+ if (args.str)
+ free(args.str);
+
+ return result;
+}
+
+static inline int
+eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
+ const char *ring_name)
+{
+ return snprintf(name, len, "eth_p%d_q%d_%s",
+ port_id, queue_id, ring_name);
+}
+
+int
+rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc = 0;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ return -ENAMETOOLONG;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz)
+ rc = rte_memzone_free(mz);
+ else
+ rc = -ENOENT;
+
+ return rc;
+}
+
+const struct rte_memzone *
+rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id, size_t size, unsigned int align,
+ int socket_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz) {
+ if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
+ size > mz->len ||
+ ((uintptr_t)mz->addr & (align - 1)) != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "memzone %s does not justify the requested attributes\n",
+ mz->name);
+ return NULL;
+ }
+
+ return mz;
+ }
+
+ return rte_memzone_reserve_aligned(z_name, size, socket_id,
+ RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+int
+rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
+ peer_info, direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
+ direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
+ struct rte_hairpin_peer_info *cur_info,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* Current queue information is not mandatory. */
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[peer_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
+ cur_info, peer_info, direction);
+}
+
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
+ .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
uint16_t
rte_eth_pkt_burst_dummy(void *queue __rte_unused,
@@ -11,3 +637,135 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
{
return 0;
}
+
+int
+rte_eth_representor_id_get(uint16_t port_id,
+ enum rte_eth_representor_type type,
+ int controller, int pf, int representor_port,
+ uint16_t *repr_id)
+{
+ int ret, n, count;
+ uint32_t i;
+ struct rte_eth_representor_info *info = NULL;
+ size_t size;
+
+ if (type == RTE_ETH_REPRESENTOR_NONE)
+ return 0;
+ if (repr_id == NULL)
+ return -EINVAL;
+
+ /* Get PMD representor range info. */
+ ret = rte_eth_representor_info_get(port_id, NULL);
+ if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
+ controller == -1 && pf == -1) {
+ /* Direct mapping for legacy VF representor. */
+ *repr_id = representor_port;
+ return 0;
+ } else if (ret < 0) {
+ return ret;
+ }
+ n = ret;
+ size = sizeof(*info) + n * sizeof(info->ranges[0]);
+ info = calloc(1, size);
+ if (info == NULL)
+ return -ENOMEM;
+ info->nb_ranges_alloc = n;
+ ret = rte_eth_representor_info_get(port_id, info);
+ if (ret < 0)
+ goto out;
+
+ /* Default controller and pf to caller. */
+ if (controller == -1)
+ controller = info->controller;
+ if (pf == -1)
+ pf = info->pf;
+
+ /* Locate representor ID. */
+ ret = -ENOENT;
+ for (i = 0; i < info->nb_ranges; ++i) {
+ if (info->ranges[i].type != type)
+ continue;
+ if (info->ranges[i].controller != controller)
+ continue;
+ if (info->ranges[i].id_end < info->ranges[i].id_base) {
+ RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
+ port_id, info->ranges[i].id_base,
+ info->ranges[i].id_end, i);
+ continue;
+
+ }
+ count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
+ switch (info->ranges[i].type) {
+ case RTE_ETH_REPRESENTOR_PF:
+ if (pf < info->ranges[i].pf ||
+ pf >= info->ranges[i].pf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (pf - info->ranges[i].pf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_VF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].vf ||
+ representor_port >= info->ranges[i].vf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].vf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_SF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].sf ||
+ representor_port >= info->ranges[i].sf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].sf);
+ ret = 0;
+ goto out;
+ default:
+ break;
+ }
+ }
+out:
+ free(info);
+ return ret;
+}
+
+int
+rte_eth_switch_domain_alloc(uint16_t *domain_id)
+{
+ uint16_t i;
+
+ *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (eth_dev_switch_domains[i].state ==
+ RTE_ETH_SWITCH_DOMAIN_UNUSED) {
+ eth_dev_switch_domains[i].state =
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
+ *domain_id = i;
+ return 0;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+int
+rte_eth_switch_domain_free(uint16_t domain_id)
+{
+ if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
+ domain_id >= RTE_MAX_ETHPORTS)
+ return -EINVAL;
+
+ if (eth_dev_switch_domains[domain_id].state !=
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
+ return -EINVAL;
+
+ eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
+
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 8fca20c7d45b..84dc0b320ed0 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -3,10 +3,22 @@
*/
#include <rte_debug.h>
+
#include "rte_ethdev.h"
#include "ethdev_driver.h"
#include "ethdev_private.h"
+static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
+
+/* Shared memory between primary and secondary processes. */
+struct eth_dev_shared *eth_dev_shared_data;
+
+/* spinlock for shared data allocation */
+static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
+
+/* spinlock for eth device callbacks */
+rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
uint16_t
eth_dev_to_id(const struct rte_eth_dev *dev)
{
@@ -302,3 +314,122 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
return nb_pkts;
}
+
+void
+eth_dev_shared_data_prepare(void)
+{
+ const unsigned int flags = 0;
+ const struct rte_memzone *mz;
+
+ rte_spinlock_lock(&eth_dev_shared_data_lock);
+
+ if (eth_dev_shared_data == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* Allocate port data and ownership shared memory. */
+ mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
+ sizeof(*eth_dev_shared_data),
+ rte_socket_id(), flags);
+ } else
+ mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
+ if (mz == NULL)
+ rte_panic("Cannot allocate ethdev shared data\n");
+
+ eth_dev_shared_data = mz->addr;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ eth_dev_shared_data->next_owner_id =
+ RTE_ETH_DEV_NO_OWNER + 1;
+ rte_spinlock_init(&eth_dev_shared_data->ownership_lock);
+ memset(eth_dev_shared_data->data, 0,
+ sizeof(eth_dev_shared_data->data));
+ }
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data_lock);
+}
+
+void
+eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **rxq = dev->data->rx_queues;
+
+ if (rxq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->rx_queue_release != NULL)
+ (*dev->dev_ops->rx_queue_release)(dev, qid);
+ rxq[qid] = NULL;
+}
+
+void
+eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **txq = dev->data->tx_queues;
+
+ if (txq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->tx_queue_release != NULL)
+ (*dev->dev_ops->tx_queue_release)(dev, qid);
+ txq[qid] = NULL;
+}
+
+int
+eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_rx_queues;
+ unsigned int i;
+
+ if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
+ sizeof(dev->data->rx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->rx_queues == NULL) {
+ dev->data->nb_rx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ rte_free(dev->data->rx_queues);
+ dev->data->rx_queues = NULL;
+ }
+ dev->data->nb_rx_queues = nb_queues;
+ return 0;
+}
+
+int
+eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_tx_queues;
+ unsigned int i;
+
+ if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
+ sizeof(dev->data->tx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->tx_queues == NULL) {
+ dev->data->nb_tx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ rte_free(dev->data->tx_queues);
+ dev->data->tx_queues = NULL;
+ }
+ dev->data->nb_tx_queues = nb_queues;
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index cc91025e8d9b..cc9879907ce5 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -5,10 +5,38 @@
#ifndef _ETH_PRIVATE_H_
#define _ETH_PRIVATE_H_
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
#include <rte_os_shim.h>
#include "rte_ethdev.h"
+struct eth_dev_shared {
+ uint64_t next_owner_id;
+ rte_spinlock_t ownership_lock;
+ struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
+};
+
+extern struct eth_dev_shared *eth_dev_shared_data;
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_eth_dev_callback {
+ TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
+ rte_eth_dev_cb_fn cb_fn; /**< Callback address */
+ void *cb_arg; /**< Parameter for callback */
+ void *ret_param; /**< Return parameter */
+ enum rte_eth_event_type event; /**< Interrupt event type */
+ uint32_t active; /**< Callback is executing */
+};
+
+extern rte_spinlock_t eth_dev_cb_lock;
+
/*
* Convert rte_eth_dev pointer to port ID.
* NULL will be translated to RTE_MAX_ETHPORTS.
@@ -33,4 +61,12 @@ void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
const struct rte_eth_dev *dev);
+
+void eth_dev_shared_data_prepare(void);
+
+void eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid);
+void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid);
+int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 2a479bea2128..70c850a2f18a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -30,7 +30,6 @@
#include <rte_errno.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_kvargs.h>
#include <rte_class.h>
#include <rte_ether.h>
#include <rte_telemetry.h>
@@ -41,37 +40,23 @@
#include "ethdev_profile.h"
#include "ethdev_private.h"
-static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
/* public fast-path API */
struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-/* spinlock for eth device callbacks */
-static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
-
/* spinlock for add/remove Rx callbacks */
static rte_spinlock_t eth_dev_rx_cb_lock = RTE_SPINLOCK_INITIALIZER;
/* spinlock for add/remove Tx callbacks */
static rte_spinlock_t eth_dev_tx_cb_lock = RTE_SPINLOCK_INITIALIZER;
-/* spinlock for shared data allocation */
-static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
-
/* store statistics names and its offset in stats structure */
struct rte_eth_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
unsigned offset;
};
-/* Shared memory between primary and secondary processes. */
-static struct {
- uint64_t next_owner_id;
- rte_spinlock_t ownership_lock;
- struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
-} *eth_dev_shared_data;
-
static const struct rte_eth_xstats_name_off eth_dev_stats_strings[] = {
{"rx_good_packets", offsetof(struct rte_eth_stats, ipackets)},
{"tx_good_packets", offsetof(struct rte_eth_stats, opackets)},
@@ -175,21 +160,6 @@ static const struct {
{RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, "FLOW_SHARED_OBJECT_KEEP"},
};
-/**
- * The user application callback description.
- *
- * It contains callback address to be registered by user application,
- * the pointer to the parameters for callback, and the event type.
- */
-struct rte_eth_dev_callback {
- TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
- rte_eth_dev_cb_fn cb_fn; /**< Callback address */
- void *cb_arg; /**< Parameter for callback */
- void *ret_param; /**< Return parameter */
- enum rte_eth_event_type event; /**< Interrupt event type */
- uint32_t active; /**< Callback is executing */
-};
-
enum {
STAT_QMAP_TX = 0,
STAT_QMAP_RX
@@ -399,227 +369,12 @@ rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id)
rte_eth_devices[ref_port_id].device);
}
-static void
-eth_dev_shared_data_prepare(void)
-{
- const unsigned flags = 0;
- const struct rte_memzone *mz;
-
- rte_spinlock_lock(&eth_dev_shared_data_lock);
-
- if (eth_dev_shared_data == NULL) {
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Allocate port data and ownership shared memory. */
- mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
- sizeof(*eth_dev_shared_data),
- rte_socket_id(), flags);
- } else
- mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
- if (mz == NULL)
- rte_panic("Cannot allocate ethdev shared data\n");
-
- eth_dev_shared_data = mz->addr;
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- eth_dev_shared_data->next_owner_id =
- RTE_ETH_DEV_NO_OWNER + 1;
- rte_spinlock_init(&eth_dev_shared_data->ownership_lock);
- memset(eth_dev_shared_data->data, 0,
- sizeof(eth_dev_shared_data->data));
- }
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data_lock);
-}
-
static bool
eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
{
return ethdev->data->name[0] != '\0';
}
-static struct rte_eth_dev *
-eth_dev_allocated(const char *name)
-{
- uint16_t i;
-
- RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].data != NULL &&
- strcmp(rte_eth_devices[i].data->name, name) == 0)
- return &rte_eth_devices[i];
- }
- return NULL;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocated(const char *name)
-{
- struct rte_eth_dev *ethdev;
-
- eth_dev_shared_data_prepare();
-
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- ethdev = eth_dev_allocated(name);
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return ethdev;
-}
-
-static uint16_t
-eth_dev_find_free_port(void)
-{
- uint16_t i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- /* Using shared name field to find a free port. */
- if (eth_dev_shared_data->data[i].name[0] == '\0') {
- RTE_ASSERT(rte_eth_devices[i].state ==
- RTE_ETH_DEV_UNUSED);
- return i;
- }
- }
- return RTE_MAX_ETHPORTS;
-}
-
-static struct rte_eth_dev *
-eth_dev_get(uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
-
- eth_dev->data = &eth_dev_shared_data->data[port_id];
-
- return eth_dev;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocate(const char *name)
-{
- uint16_t port_id;
- struct rte_eth_dev *eth_dev = NULL;
- size_t name_len;
-
- name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
- if (name_len == 0) {
- RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
- return NULL;
- }
-
- if (name_len >= RTE_ETH_NAME_MAX_LEN) {
- RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
- return NULL;
- }
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port creation between primary and secondary threads. */
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- if (eth_dev_allocated(name) != NULL) {
- RTE_ETHDEV_LOG(ERR,
- "Ethernet device with name %s already allocated\n",
- name);
- goto unlock;
- }
-
- port_id = eth_dev_find_free_port();
- if (port_id == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Reached maximum number of Ethernet ports\n");
- goto unlock;
- }
-
- eth_dev = eth_dev_get(port_id);
- strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
- eth_dev->data->port_id = port_id;
- eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
- eth_dev->data->mtu = RTE_ETHER_MTU;
- pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
-
-unlock:
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return eth_dev;
-}
-
-/*
- * Attach to a port already registered by the primary process, which
- * makes sure that the same device would have the same port ID both
- * in the primary and secondary process.
- */
-struct rte_eth_dev *
-rte_eth_dev_attach_secondary(const char *name)
-{
- uint16_t i;
- struct rte_eth_dev *eth_dev = NULL;
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port attachment to primary port creation and release. */
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
- break;
- }
- if (i == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Device %s is not driven by the primary process\n",
- name);
- } else {
- eth_dev = eth_dev_get(i);
- RTE_ASSERT(eth_dev->data->port_id == i);
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
- return eth_dev;
-}
-
-int
-rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
-{
- if (eth_dev == NULL)
- return -EINVAL;
-
- eth_dev_shared_data_prepare();
-
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
- rte_eth_dev_callback_process(eth_dev,
- RTE_ETH_EVENT_DESTROY, NULL);
-
- eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
-
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- eth_dev->state = RTE_ETH_DEV_UNUSED;
- eth_dev->device = NULL;
- eth_dev->process_private = NULL;
- eth_dev->intr_handle = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- eth_dev->tx_pkt_prepare = NULL;
- eth_dev->rx_queue_count = NULL;
- eth_dev->rx_descriptor_status = NULL;
- eth_dev->tx_descriptor_status = NULL;
- eth_dev->dev_ops = NULL;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- rte_free(eth_dev->data->rx_queues);
- rte_free(eth_dev->data->tx_queues);
- rte_free(eth_dev->data->mac_addrs);
- rte_free(eth_dev->data->hash_mac_addrs);
- rte_free(eth_dev->data->dev_private);
- pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
- memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return 0;
-}
-
int
rte_eth_dev_is_valid_port(uint16_t port_id)
{
@@ -894,17 +649,6 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
return -ENODEV;
}
-struct rte_eth_dev *
-rte_eth_dev_get_by_name(const char *name)
-{
- uint16_t pid;
-
- if (rte_eth_dev_get_port_by_name(name, &pid))
- return NULL;
-
- return &rte_eth_devices[pid];
-}
-
static int
eth_err(uint16_t port_id, int ret)
{
@@ -915,62 +659,6 @@ eth_err(uint16_t port_id, int ret)
return ret;
}
-static void
-eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **rxq = dev->data->rx_queues;
-
- if (rxq[qid] == NULL)
- return;
-
- if (dev->dev_ops->rx_queue_release != NULL)
- (*dev->dev_ops->rx_queue_release)(dev, qid);
- rxq[qid] = NULL;
-}
-
-static void
-eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **txq = dev->data->tx_queues;
-
- if (txq[qid] == NULL)
- return;
-
- if (dev->dev_ops->tx_queue_release != NULL)
- (*dev->dev_ops->tx_queue_release)(dev, qid);
- txq[qid] = NULL;
-}
-
-static int
-eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_rx_queues;
- unsigned i;
-
- if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
- sizeof(dev->data->rx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
- dev->data->nb_rx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- rte_free(dev->data->rx_queues);
- dev->data->rx_queues = NULL;
- }
- dev->data->nb_rx_queues = nb_queues;
- return 0;
-}
-
static int
eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
@@ -1161,36 +849,6 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
return eth_err(port_id, dev->dev_ops->tx_queue_stop(dev, tx_queue_id));
}
-static int
-eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_tx_queues;
- unsigned i;
-
- if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
- sizeof(dev->data->tx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
- dev->data->nb_tx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- rte_free(dev->data->tx_queues);
- dev->data->tx_queues = NULL;
- }
- dev->data->nb_tx_queues = nb_queues;
- return 0;
-}
-
uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
@@ -1682,21 +1340,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return ret;
}
-void
-rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
-{
- if (dev->data->dev_started) {
- RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
- dev->data->port_id);
- return;
- }
-
- eth_dev_rx_queue_config(dev, 0);
- eth_dev_tx_queue_config(dev, 0);
-
- memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
-}
-
static void
eth_dev_mac_restore(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -4914,52 +4557,6 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
return ret;
}
-int
-rte_eth_dev_callback_process(struct rte_eth_dev *dev,
- enum rte_eth_event_type event, void *ret_param)
-{
- struct rte_eth_dev_callback *cb_lst;
- struct rte_eth_dev_callback dev_cb;
- int rc = 0;
-
- rte_spinlock_lock(&eth_dev_cb_lock);
- TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
- if (cb_lst->cb_fn == NULL || cb_lst->event != event)
- continue;
- dev_cb = *cb_lst;
- cb_lst->active = 1;
- if (ret_param != NULL)
- dev_cb.ret_param = ret_param;
-
- rte_spinlock_unlock(&eth_dev_cb_lock);
- rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
- dev_cb.cb_arg, dev_cb.ret_param);
- rte_spinlock_lock(&eth_dev_cb_lock);
- cb_lst->active = 0;
- }
- rte_spinlock_unlock(&eth_dev_cb_lock);
- return rc;
-}
-
-void
-rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
-{
- if (dev == NULL)
- return;
-
- /*
- * for secondary process, at that point we expect device
- * to be already 'usable', so shared data and all function pointers
- * for fast-path devops have to be setup properly inside rte_eth_dev.
- */
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
-
- rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
-
- dev->state = RTE_ETH_DEV_ATTACHED;
-}
-
int
rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
{
@@ -5032,156 +4629,6 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
return fd;
}
-static inline int
-eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
- const char *ring_name)
-{
- return snprintf(name, len, "eth_p%d_q%d_%s",
- port_id, queue_id, ring_name);
-}
-
-const struct rte_memzone *
-rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id, size_t size, unsigned align,
- int socket_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- rte_errno = ENAMETOOLONG;
- return NULL;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz) {
- if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
- size > mz->len ||
- ((uintptr_t)mz->addr & (align - 1)) != 0) {
- RTE_ETHDEV_LOG(ERR,
- "memzone %s does not justify the requested attributes\n",
- mz->name);
- return NULL;
- }
-
- return mz;
- }
-
- return rte_memzone_reserve_aligned(z_name, size, socket_id,
- RTE_MEMZONE_IOVA_CONTIG, align);
-}
-
-int
-rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc = 0;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- return -ENAMETOOLONG;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz)
- rc = rte_memzone_free(mz);
- else
- rc = -ENOENT;
-
- return rc;
-}
-
-int
-rte_eth_dev_create(struct rte_device *device, const char *name,
- size_t priv_data_size,
- ethdev_bus_specific_init ethdev_bus_specific_init,
- void *bus_init_params,
- ethdev_init_t ethdev_init, void *init_params)
-{
- struct rte_eth_dev *ethdev;
- int retval;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- ethdev = rte_eth_dev_allocate(name);
- if (!ethdev)
- return -ENODEV;
-
- if (priv_data_size) {
- ethdev->data->dev_private = rte_zmalloc_socket(
- name, priv_data_size, RTE_CACHE_LINE_SIZE,
- device->numa_node);
-
- if (!ethdev->data->dev_private) {
- RTE_ETHDEV_LOG(ERR,
- "failed to allocate private data\n");
- retval = -ENOMEM;
- goto probe_failed;
- }
- }
- } else {
- ethdev = rte_eth_dev_attach_secondary(name);
- if (!ethdev) {
- RTE_ETHDEV_LOG(ERR,
- "secondary process attach failed, ethdev doesn't exist\n");
- return -ENODEV;
- }
- }
-
- ethdev->device = device;
-
- if (ethdev_bus_specific_init) {
- retval = ethdev_bus_specific_init(ethdev, bus_init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR,
- "ethdev bus specific initialisation failed\n");
- goto probe_failed;
- }
- }
-
- retval = ethdev_init(ethdev, init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
- goto probe_failed;
- }
-
- rte_eth_dev_probing_finish(ethdev);
-
- return retval;
-
-probe_failed:
- rte_eth_dev_release_port(ethdev);
- return retval;
-}
-
-int
-rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
- ethdev_uninit_t ethdev_uninit)
-{
- int ret;
-
- ethdev = rte_eth_dev_allocated(ethdev->data->name);
- if (!ethdev)
- return -ENODEV;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
-
- ret = ethdev_uninit(ethdev);
- if (ret)
- return ret;
-
- return rte_eth_dev_release_port(ethdev);
-}
-
int
rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
int epfd, int op, void *data)
@@ -6005,22 +5452,6 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
return eth_err(port_id, (*dev->dev_ops->hairpin_cap_get)(dev, cap));
}
-int
-rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
-int
-rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
int
rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
{
@@ -6042,255 +5473,6 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
return (*dev->dev_ops->pool_ops_supported)(dev, pool);
}
-/**
- * A set of values to describe the possible states of a switch domain.
- */
-enum rte_eth_switch_domain_state {
- RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED
-};
-
-/**
- * Array of switch domains available for allocation. Array is sized to
- * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
- * ethdev ports in a single process.
- */
-static struct rte_eth_dev_switch {
- enum rte_eth_switch_domain_state state;
-} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
-
-int
-rte_eth_switch_domain_alloc(uint16_t *domain_id)
-{
- uint16_t i;
-
- *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (eth_dev_switch_domains[i].state ==
- RTE_ETH_SWITCH_DOMAIN_UNUSED) {
- eth_dev_switch_domains[i].state =
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
- *domain_id = i;
- return 0;
- }
- }
-
- return -ENOSPC;
-}
-
-int
-rte_eth_switch_domain_free(uint16_t domain_id)
-{
- if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
- domain_id >= RTE_MAX_ETHPORTS)
- return -EINVAL;
-
- if (eth_dev_switch_domains[domain_id].state !=
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
- return -EINVAL;
-
- eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
-
- return 0;
-}
-
-static int
-eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
-{
- int state;
- struct rte_kvargs_pair *pair;
- char *letter;
-
- arglist->str = strdup(str_in);
- if (arglist->str == NULL)
- return -ENOMEM;
-
- letter = arglist->str;
- state = 0;
- arglist->count = 0;
- pair = &arglist->pairs[0];
- while (1) {
- switch (state) {
- case 0: /* Initial */
- if (*letter == '=')
- return -EINVAL;
- else if (*letter == '\0')
- return 0;
-
- state = 1;
- pair->key = letter;
- /* fall-thru */
-
- case 1: /* Parsing key */
- if (*letter == '=') {
- *letter = '\0';
- pair->value = letter + 1;
- state = 2;
- } else if (*letter == ',' || *letter == '\0')
- return -EINVAL;
- break;
-
-
- case 2: /* Parsing value */
- if (*letter == '[')
- state = 3;
- else if (*letter == ',') {
- *letter = '\0';
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- } else if (*letter == '\0') {
- letter--;
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- }
- break;
-
- case 3: /* Parsing list */
- if (*letter == ']')
- state = 2;
- else if (*letter == '\0')
- return -EINVAL;
- break;
- }
- letter++;
- }
-}
-
-int
-rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
-{
- struct rte_kvargs args;
- struct rte_kvargs_pair *pair;
- unsigned int i;
- int result = 0;
-
- memset(eth_da, 0, sizeof(*eth_da));
-
- result = eth_dev_devargs_tokenise(&args, dargs);
- if (result < 0)
- goto parse_cleanup;
-
- for (i = 0; i < args.count; i++) {
- pair = &args.pairs[i];
- if (strcmp("representor", pair->key) == 0) {
- if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
- RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
- dargs);
- result = -1;
- goto parse_cleanup;
- }
- result = rte_eth_devargs_parse_representor_ports(
- pair->value, eth_da);
- if (result < 0)
- goto parse_cleanup;
- }
- }
-
-parse_cleanup:
- if (args.str)
- free(args.str);
-
- return result;
-}
-
-int
-rte_eth_representor_id_get(uint16_t port_id,
- enum rte_eth_representor_type type,
- int controller, int pf, int representor_port,
- uint16_t *repr_id)
-{
- int ret, n, count;
- uint32_t i;
- struct rte_eth_representor_info *info = NULL;
- size_t size;
-
- if (type == RTE_ETH_REPRESENTOR_NONE)
- return 0;
- if (repr_id == NULL)
- return -EINVAL;
-
- /* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(port_id, NULL);
- if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
- controller == -1 && pf == -1) {
- /* Direct mapping for legacy VF representor. */
- *repr_id = representor_port;
- return 0;
- } else if (ret < 0) {
- return ret;
- }
- n = ret;
- size = sizeof(*info) + n * sizeof(info->ranges[0]);
- info = calloc(1, size);
- if (info == NULL)
- return -ENOMEM;
- info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(port_id, info);
- if (ret < 0)
- goto out;
-
- /* Default controller and pf to caller. */
- if (controller == -1)
- controller = info->controller;
- if (pf == -1)
- pf = info->pf;
-
- /* Locate representor ID. */
- ret = -ENOENT;
- for (i = 0; i < info->nb_ranges; ++i) {
- if (info->ranges[i].type != type)
- continue;
- if (info->ranges[i].controller != controller)
- continue;
- if (info->ranges[i].id_end < info->ranges[i].id_base) {
- RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- port_id, info->ranges[i].id_base,
- info->ranges[i].id_end, i);
- continue;
-
- }
- count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
- switch (info->ranges[i].type) {
- case RTE_ETH_REPRESENTOR_PF:
- if (pf < info->ranges[i].pf ||
- pf >= info->ranges[i].pf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (pf - info->ranges[i].pf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_VF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].vf ||
- representor_port >= info->ranges[i].vf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].vf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_SF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].sf ||
- representor_port >= info->ranges[i].sf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].sf);
- ret = 0;
- goto out;
- default:
- break;
- }
- }
-out:
- free(info);
- return ret;
-}
-
static int
eth_dev_handle_port_list(const char *cmd __rte_unused,
const char *params __rte_unused,
@@ -6533,61 +5715,6 @@ eth_dev_handle_port_info(const char *cmd __rte_unused,
return 0;
}
-int
-rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
- struct rte_hairpin_peer_info *cur_info,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* Current queue information is not mandatory. */
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[peer_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
- cur_info, peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
- peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
- direction);
-}
-
int
rte_eth_representor_info_get(uint16_t port_id,
struct rte_eth_representor_info *info)
@@ -6722,34 +5849,6 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
-int
-rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
-{
- static const struct rte_mbuf_dynfield field_desc = {
- .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
- .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
- .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
- };
- static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
- .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
- };
- int offset;
-
- offset = rte_mbuf_dynfield_register(&field_desc);
- if (offset < 0)
- return -1;
- if (field_offset != NULL)
- *field_offset = offset;
-
- offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
- if (offset < 0)
- return -1;
- if (flag_offset != NULL)
- *flag_offset = offset;
-
- return 0;
-}
-
int
rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
{
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v3 1/2] ethdev: introduce generic dummy packet burst function
2022-02-11 17:14 ` [PATCH v3 1/2] " Ferruh Yigit
2022-02-11 17:14 ` [PATCH v3 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
@ 2022-02-11 18:03 ` Thomas Monjalon
1 sibling, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2022-02-11 18:03 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Ciara Loftus, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, John Daley, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Andrew Rybchenko,
Ray Kinsella, dev, Morten Brørup
11/02/2022 18:14, Ferruh Yigit:
> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>
> These dummy functions are very simple, introduce a common function in
> the ethdev and update drivers to use it instead of each driver having
> its own functions.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Looks to be a good cleanup, thanks.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v3 2/2] ethdev: move driver interface functions to its own file
2022-02-11 17:14 ` [PATCH v3 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
@ 2022-02-11 18:09 ` Thomas Monjalon
2022-02-11 18:39 ` Ferruh Yigit
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2022-02-11 18:09 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Andrew Rybchenko, Anatoly Burakov, dev
11/02/2022 18:14, Ferruh Yigit:
> Relevant functions moved to ethdev_driver.c.
> No functional change.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++
> lib/ethdev/ethdev_private.c | 131 ++++++
> lib/ethdev/ethdev_private.h | 36 ++
> lib/ethdev/rte_ethdev.c | 901 ------------------------------------
> 4 files changed, 925 insertions(+), 901 deletions(-)
Please could you add more details in the commit log while merging?
We need to know that they are internal functions used only by drivers.
Also it would be interesting to explain the difference between
ethdev_driver.c and ethdev_private.h.
With this info,
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v4 1/2] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
` (4 preceding siblings ...)
2022-02-11 17:14 ` [PATCH v3 1/2] " Ferruh Yigit
@ 2022-02-11 18:38 ` Ferruh Yigit
2022-02-11 18:38 ` [PATCH v4 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
6 siblings, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 18:38 UTC (permalink / raw)
To: Ciara Loftus, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, John Daley, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, Ray Kinsella
Cc: dev, Ferruh Yigit, Morten Brørup
Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
These dummy functions are very simple. Introduce a common function in
the ethdev library and update drivers to use it instead of each driver
having its own functions.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
Cc: Ciara Loftus <ciara.loftus@intel.com>
v2:
* Convert the inline function to an actual function in the new
ethdev_driver.c file, because PMDs compare these function pointers
(see the usage sketch below). The PMD interface of ethdev can be
moved to 'ethdev_driver.c' later.
v3:
* updated af_xdp too
v4:
* Commit log updated and checkpatch warning fixed
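
Usage sketch (not part of the patch): the pattern the converted drivers
follow is to park the datapath on the shared no-op burst function and,
where needed, detect that state by pointer comparison. The helper names
quiesce_datapath()/datapath_is_quiesced() below are made up for
illustration only.

/* Sketch only; assumes a PMD build context where the internal
 * <ethdev_driver.h> header is available. */
#include <rte_ethdev.h>
#include <rte_atomic.h>
#include <ethdev_driver.h>

/* Hypothetical helper: park the datapath before unsafe queue changes,
 * as the drivers above do on stop/reconfigure. */
static void
quiesce_datapath(struct rte_eth_dev *dev)
{
	dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
	dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
	rte_mb(); /* make other lcores see the new burst pointers */
}

/* Hypothetical check, same idea as the enic/hns3 comparisons: the
 * comparison is reliable only because rte_eth_pkt_burst_dummy is a
 * real function with a single address, shared by ethdev and all PMDs,
 * which is why it is not a static inline. */
static int
datapath_is_quiesced(const struct rte_eth_dev *dev)
{
	return dev->rx_pkt_burst == rte_eth_pkt_burst_dummy;
}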
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 26 ++-------------
drivers/net/ark/ark_ethdev.c | 8 ++---
drivers/net/ark/ark_ethdev_rx.c | 9 -----
drivers/net/ark/ark_ethdev_rx.h | 2 --
drivers/net/ark/ark_ethdev_tx.c | 9 -----
drivers/net/ark/ark_ethdev_tx.h | 3 --
drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++-----
drivers/net/bnxt/bnxt.h | 4 ---
drivers/net/bnxt/bnxt_cpr.c | 4 +--
drivers/net/bnxt/bnxt_rxr.c | 14 --------
drivers/net/bnxt/bnxt_txr.c | 14 --------
drivers/net/cnxk/cnxk_ethdev.c | 14 ++------
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 25 --------------
drivers/net/enic/enic.h | 3 --
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_rxtx.c | 11 ------
drivers/net/hns3/hns3_rxtx.c | 18 +++-------
drivers/net/hns3/hns3_rxtx.h | 3 --
drivers/net/mlx4/mlx4.c | 8 ++---
drivers/net/mlx4/mlx4_mp.c | 4 +--
drivers/net/mlx4/mlx4_rxtx.c | 52 -----------------------------
drivers/net/mlx4/mlx4_rxtx.h | 4 ---
drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +--
drivers/net/mlx5/linux/mlx5_os.c | 4 +--
drivers/net/mlx5/mlx5.c | 4 +--
drivers/net/mlx5/mlx5_rx.c | 27 +--------------
drivers/net/mlx5/mlx5_rx.h | 2 --
drivers/net/mlx5/mlx5_trigger.c | 4 +--
drivers/net/mlx5/mlx5_tx.c | 25 --------------
drivers/net/mlx5/mlx5_tx.h | 2 --
drivers/net/mlx5/windows/mlx5_os.c | 4 +--
drivers/net/pfe/pfe_ethdev.c | 20 ++---------
drivers/net/qede/qede_ethdev.c | 4 +--
drivers/net/qede/qede_rxtx.c | 9 -----
drivers/net/qede/qede_rxtx.h | 3 --
lib/ethdev/ethdev_driver.c | 13 ++++++++
lib/ethdev/ethdev_driver.h | 17 ++++++++++
lib/ethdev/meson.build | 1 +
lib/ethdev/version.map | 1 +
42 files changed, 73 insertions(+), 325 deletions(-)
create mode 100644 lib/ethdev/ethdev_driver.c
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 4a37c11960e1..6ac710c6bdc6 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1916,28 +1916,6 @@ afxdp_mp_send_fds(const struct rte_mp_msg *request, const void *peer)
return 0;
}
-/* Secondary process rx function. RX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_rx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
-/* Secondary process tx function. TX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_tx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
static int
rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
{
@@ -1961,8 +1939,8 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
}
eth_dev->dev_ops = &ops;
eth_dev->device = &dev->device;
- eth_dev->rx_pkt_burst = eth_af_xdp_rx_noop;
- eth_dev->tx_pkt_burst = eth_af_xdp_tx_noop;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->process_private = (struct pmd_process_private *)
rte_zmalloc_socket(name,
sizeof(struct pmd_process_private),
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3f023..230a1272e986 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Use dummy function until setup */
- dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
- dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
@@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
if (ark->start_pg)
ark_pktgen_pause(ark->pg);
- dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
- dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* STOP TX Side */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 98658ce621e2..37a88cbedee4 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_recv_pkts_noop(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 859fcf1e6f71..f64b3dd137b3 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
-uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void eth_ark_dev_rx_queue_release(void *rx_queue);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 676e4115d3bf..abdce6a8cc0d 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue,
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_xmit_pkts_noop(void *vtxq __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 12c71a7158a9..7134dbfeed81 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -10,9 +10,6 @@
#include <ethdev_driver.h>
-uint16_t eth_ark_xmit_pkts_noop(void *vtxq,
- struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_xmit_pkts(void *vtxq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 66b0512c8695..cb5733c5972b 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
-static uint16_t
-bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy;
- dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
}
void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0cbb58b2cf3e..44724a9dfe91 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
int wait_to_complete);
-uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
-uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
extern const struct rte_flow_ops bnxt_flow_ops;
diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 9b9285b79903..99af0f9e87ee 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp)
void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
{
- eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b60c2470f39e..5a9cf48e6739 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx_pkts;
}
-/*
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_recv_pkts(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
void bnxt_free_rx_rings(struct bnxt *bp)
{
int i;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 3b8f2382f92e..7a7196a23731 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx_pkts;
}
-/*
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct bnxt *bp = dev->data->dev_private;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 53dfb5eae80e..c6a9ada05bb4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
return rc;
}
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
static void
nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
@@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
* which caused app crash since rx/tx burst is still
* on different lcores
*/
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 379daec5f4e8..5be4fef8fe68 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
}
/*changing tx burst function to avoid any more enqueues */
- dev->tx_pkt_burst = dummy_dev_tx;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* Loop while dpni_disable() attempts to drain the egress FQs
* and confirm them back to us.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 1b49f43103a7..e79a7fc2e286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -264,7 +264,6 @@ __rte_internal
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
struct rte_mbuf **bufs, uint16_t nb_pkts);
-uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 81b28e20cb47..b8844fbdf107 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
-{
- (void)queue;
- (void)bufs;
- (void)nb_pkts;
- return 0;
-}
-
#if defined(RTE_TOOLCHAIN_GCC)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d5493c98345d..163a1f037e26 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t enic_dummy_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 163be09809b1..a8d470e8ac93 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev)
RTE_PTYPE_UNKNOWN
};
- if (dev->rx_pkt_burst != enic_dummy_recv_pkts &&
+ if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy &&
dev->rx_pkt_burst != NULL) {
struct enic *enic = pmd_priv(dev);
if (enic->overlay_offload)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 97d97ea793f2..9f351de72eb4 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
/* replace Rx function with a no-op to avoid getting stale pkts */
- eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 74a90694c718..7a66d72275d9 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -31,17 +31,6 @@
#define rte_packet_prefetch(p) do {} while (0)
#endif
-/* dummy receive function to replace actual function in
- * order to do safe reconfiguration operations.
- */
-uint16_t
-enic_dummy_recv_pkts(__rte_unused void *rx_queue,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static inline uint16_t
enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts, const bool use_64b_desc)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3b72c2375a60..8dc6cfac704d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
return hns3_xmit_pkts;
}
-uint16_t
-hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- return 0;
-}
-
static void
hns3_trace_rxtx_function(struct rte_eth_dev *dev)
{
@@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev);
eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status;
eth_dev->tx_pkt_burst = hw->set_link_down ?
- hns3_dummy_rxtx_burst :
+ rte_eth_pkt_burst_dummy :
hns3_get_tx_function(eth_dev, &prep);
eth_dev->tx_pkt_prepare = prep;
eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status;
hns3_trace_rxtx_function(eth_dev);
} else {
- eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
- eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_prepare = NULL;
}
@@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts)
return hns3_tx_done_cleanup_full(q, free_cnt);
- else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst)
+ else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy)
return 0;
else
return -ENOTSUP;
@@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw)
void
hns3_stop_tx_datapath(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 094b65b7de70..a000318357ab 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev);
void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev,
eth_tx_prep_t *prep);
-uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id);
void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id,
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 3f3c4a7c7214..910b76a92c42 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
return 0;
DEBUG("%p: detaching flows from all RX queues", (void *)dev);
priv->started = 0;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
@@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
DEBUG("%p: closing device \"%s\"",
(void *)dev,
((priv->ctx != NULL) ? priv->ctx->device->name : ""));
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
index 8fcfb5490ee9..1da64910aadd 100644
--- a/drivers/net/mlx4/mlx4_mp.c
+++ b/drivers/net/mlx4/mlx4_mp.c
@@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
break;
case MLX4_MP_REQ_STOP_RXTX:
INFO("port %u stopping datapath", dev->data->port_id);
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(dev, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ed9e41fcdea9..059e432a63fc 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq->stats.ipackets += i;
return i;
}
-
-/**
- * Dummy DPDK callback for Tx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to Tx queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_txq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
-
-/**
- * Dummy DPDK callback for Rx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to Rx queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_rxq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index 83e9534cd0a7..70f3cd868058 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
uint16_t pkts_n);
uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
-uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
/* mlx4_txq.c */
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index c448a3e9eb87..e607089e0e20 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
break;
case MLX5_MP_REQ_STOP_RXTX:
DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id);
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(&priv->mp_id, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68abb..bbe05bb837e0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60a5..5571e9067787 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_action_handle_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index f388fcc31395..11ea935d72f0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
- dev->rx_pkt_burst == removed_rx_burst) {
+ dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
rte_errno = ENOTSUP;
return -rte_errno;
}
@@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
return i;
}
-/**
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to RX queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-removed_rx_burst(void *dpdk_rxq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/*
* Vectorized Rx routines are not compiled in when required vector instructions
* are not supported on a target architecture.
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index cb5d51340db7..7e417819f7e8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c9c0a4fff8..3a59237b1a7a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
dev->data->dev_started = 0;
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf2096753..8453b2701a9f 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
return 0;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-removed_tx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/**
* Update completion queue consuming index via doorbell
* and flush the completed data buffers.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 398cadfeaa46..c4b8271f6fb3 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
/* mlx5_tx.c */
-uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx __rte_unused);
int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index ac0af0ff7d43..7f3532426f1f 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index edf32aa70da6..c2991ab1ccaa 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkts;
}
-static uint16_t
-pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
- __rte_unused struct rte_mbuf **tx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-static uint16_t
-pfe_dummy_recv_pkts(__rte_unused void *rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static int
pfe_eth_open(struct rte_eth_dev *dev)
{
@@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
gemac_disable(priv->EMAC_baseaddr);
gpi_disable(priv->GPI_baseaddr);
- dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
- dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return 0;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1122a297e6b..ea6b71f09355 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
bool use_tx_offload = false;
if (is_dummy) {
- dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
- dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 7088c57b501d..85784f4a82a6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return eng0_pkts + eng1_pkts;
}
-uint16_t
-qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-
/* this function does a fake walk through over completion queue
* to calculate number of BDs used by HW.
* At the end, it restores the state of completion queue.
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 11ed1d9b9c50..013a4a07c716 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t
qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
- struct rte_mbuf **pkts,
- uint16_t nb_pkts);
int qede_start_queues(struct rte_eth_dev *eth_dev);
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
new file mode 100644
index 000000000000..fb7323f4d327
--- /dev/null
+++ b/lib/ethdev/ethdev_driver.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include "ethdev_driver.h"
+
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused)
+{
+ return 0;
+}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 617b450d5763..8de8e1c67113 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1509,6 +1509,23 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
*dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
}
+/**
+ * @internal
+ * Dummy DPDK callback for Rx/Tx packet burst.
+ *
+ * @param queue
+ * Pointer to Rx/Tx queue
+ * @param pkts
+ * Packet array
+ * @param nb_pkts
+ * Number of packets in packet array
+ */
+__rte_internal
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused);
+
/**
* Allocate an unique switch domain identifier.
*
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 0205c853df53..a094585bf715 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -2,6 +2,7 @@
# Copyright(c) 2017 Intel Corporation
sources = files(
+ 'ethdev_driver.c',
'ethdev_private.c',
'ethdev_profile.c',
'ethdev_trace_points.c',
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1a43282ce45d..d5cc56a56023 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -289,6 +289,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
rte_eth_ip_reassembly_dynfield_register;
+ rte_eth_pkt_burst_dummy;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
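For reference, a minimal sketch of how a driver stop path uses the common
dummy burst function introduced above (example_dev_stop and its body layout
are illustrative only; the rte_* calls and the pointer assignments come from
the patch and existing DPDK APIs):

#include <ethdev_driver.h>      /* rte_eth_pkt_burst_dummy() */
#include <rte_atomic.h>         /* rte_wmb() */

static int
example_dev_stop(struct rte_eth_dev *dev)
{
        /* Redirect the fast-path callbacks to the common dummy burst
         * function so concurrent Rx/Tx calls return 0 packets instead
         * of touching queues that are being torn down.
         */
        dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
        dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
        rte_wmb(); /* publish the new pointers before releasing queues */

        return 0;
}

This mirrors what the mlx4/mlx5 and qede stop/close hunks above now do.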
* [PATCH v4 2/2] ethdev: move driver interface functions to its own file
2022-02-11 18:38 ` [PATCH v4 " Ferruh Yigit
@ 2022-02-11 18:38 ` Ferruh Yigit
2022-02-11 18:55 ` Thomas Monjalon
0 siblings, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 18:38 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko, Anatoly Burakov; +Cc: dev, Ferruh Yigit
ethdev has two interfaces: one between applications and the library,
whose APIs are declared in the public rte_ethdev.h header, and one
between drivers and the library, whose functions are declared in
ethdev_driver.h and marked as internal.
Until now all of these functions were defined in rte_ethdev.c. This
patch moves the driver-facing functions into their own file,
ethdev_driver.c, as a cleanup; the functions themselves are not
changed.
Some public APIs and driver APIs call common internal helpers, which
were mostly static since both lived in the same file. To allow moving
the driver APIs, those common helpers are moved into ethdev_private.c.
(ethdev_private.c holds functions that are internal to the library and
shared by multiple .c files within ethdev.)
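As a rough sketch of the split from a caller's point of view (the example_*
wrappers below are illustrative and not part of the patch; only the headers
and the rte_eth_* calls are real):

#include <errno.h>
#include <rte_ethdev.h>      /* public, application-facing API */
#include <ethdev_driver.h>   /* internal, driver-facing API */

/* Driver probe path: uses functions this patch moves into ethdev_driver.c. */
static int
example_driver_probe(void)
{
        struct rte_eth_dev *edev = rte_eth_dev_allocate("net_example0");

        if (edev == NULL)
                return -ENODEV;
        /* ... set edev->dev_ops and the Rx/Tx burst callbacks here ... */
        rte_eth_dev_probing_finish(edev);
        return 0;
}

/* Application path: uses functions that stay in rte_ethdev.c. */
static int
example_app_configure(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}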
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.c | 131 ++++++
lib/ethdev/ethdev_private.h | 36 ++
lib/ethdev/rte_ethdev.c | 901 ------------------------------------
4 files changed, 925 insertions(+), 901 deletions(-)
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index fb7323f4d327..9334e2a67650 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -2,7 +2,633 @@
* Copyright(c) 2022 Intel Corporation
*/
+#include <rte_kvargs.h>
+#include <rte_malloc.h>
+
#include "ethdev_driver.h"
+#include "ethdev_private.h"
+
+/**
+ * A set of values to describe the possible states of a switch domain.
+ */
+enum rte_eth_switch_domain_state {
+ RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED
+};
+
+/**
+ * Array of switch domains available for allocation. Array is sized to
+ * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
+ * ethdev ports in a single process.
+ */
+static struct rte_eth_dev_switch {
+ enum rte_eth_switch_domain_state state;
+} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_dev *
+eth_dev_allocated(const char *name)
+{
+ uint16_t i;
+
+ RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (rte_eth_devices[i].data != NULL &&
+ strcmp(rte_eth_devices[i].data->name, name) == 0)
+ return &rte_eth_devices[i];
+ }
+ return NULL;
+}
+
+static uint16_t
+eth_dev_find_free_port(void)
+{
+ uint16_t i;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ /* Using shared name field to find a free port. */
+ if (eth_dev_shared_data->data[i].name[0] == '\0') {
+ RTE_ASSERT(rte_eth_devices[i].state ==
+ RTE_ETH_DEV_UNUSED);
+ return i;
+ }
+ }
+ return RTE_MAX_ETHPORTS;
+}
+
+static struct rte_eth_dev *
+eth_dev_get(uint16_t port_id)
+{
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
+
+ eth_dev->data = &eth_dev_shared_data->data[port_id];
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocate(const char *name)
+{
+ uint16_t port_id;
+ struct rte_eth_dev *eth_dev = NULL;
+ size_t name_len;
+
+ name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
+ if (name_len == 0) {
+ RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
+ return NULL;
+ }
+
+ if (name_len >= RTE_ETH_NAME_MAX_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
+ return NULL;
+ }
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port creation between primary and secondary threads. */
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ if (eth_dev_allocated(name) != NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethernet device with name %s already allocated\n",
+ name);
+ goto unlock;
+ }
+
+ port_id = eth_dev_find_free_port();
+ if (port_id == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Reached maximum number of Ethernet ports\n");
+ goto unlock;
+ }
+
+ eth_dev = eth_dev_get(port_id);
+ strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
+ eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
+ eth_dev->data->mtu = RTE_ETHER_MTU;
+ pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
+
+unlock:
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocated(const char *name)
+{
+ struct rte_eth_dev *ethdev;
+
+ eth_dev_shared_data_prepare();
+
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ ethdev = eth_dev_allocated(name);
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return ethdev;
+}
+
+/*
+ * Attach to a port already registered by the primary process, which
+ * makes sure that the same device would have the same port ID both
+ * in the primary and secondary process.
+ */
+struct rte_eth_dev *
+rte_eth_dev_attach_secondary(const char *name)
+{
+ uint16_t i;
+ struct rte_eth_dev *eth_dev = NULL;
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port attachment to primary port creation and release. */
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
+ break;
+ }
+ if (i == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device %s is not driven by the primary process\n",
+ name);
+ } else {
+ eth_dev = eth_dev_get(i);
+ RTE_ASSERT(eth_dev->data->port_id == i);
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+ return eth_dev;
+}
+
+int
+rte_eth_dev_callback_process(struct rte_eth_dev *dev,
+ enum rte_eth_event_type event, void *ret_param)
+{
+ struct rte_eth_dev_callback *cb_lst;
+ struct rte_eth_dev_callback dev_cb;
+ int rc = 0;
+
+ rte_spinlock_lock(&eth_dev_cb_lock);
+ TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+ if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+ continue;
+ dev_cb = *cb_lst;
+ cb_lst->active = 1;
+ if (ret_param != NULL)
+ dev_cb.ret_param = ret_param;
+
+ rte_spinlock_unlock(&eth_dev_cb_lock);
+ rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
+ dev_cb.cb_arg, dev_cb.ret_param);
+ rte_spinlock_lock(&eth_dev_cb_lock);
+ cb_lst->active = 0;
+ }
+ rte_spinlock_unlock(&eth_dev_cb_lock);
+ return rc;
+}
+
+void
+rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
+{
+ if (dev == NULL)
+ return;
+
+ /*
+ * for secondary process, at that point we expect device
+ * to be already 'usable', so shared data and all function pointers
+ * for fast-path devops have to be setup properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
+
+ dev->state = RTE_ETH_DEV_ATTACHED;
+}
+
+int
+rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
+{
+ if (eth_dev == NULL)
+ return -EINVAL;
+
+ eth_dev_shared_data_prepare();
+
+ if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_DESTROY, NULL);
+
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
+ rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ eth_dev->state = RTE_ETH_DEV_UNUSED;
+ eth_dev->device = NULL;
+ eth_dev->process_private = NULL;
+ eth_dev->intr_handle = NULL;
+ eth_dev->rx_pkt_burst = NULL;
+ eth_dev->tx_pkt_burst = NULL;
+ eth_dev->tx_pkt_prepare = NULL;
+ eth_dev->rx_queue_count = NULL;
+ eth_dev->rx_descriptor_status = NULL;
+ eth_dev->tx_descriptor_status = NULL;
+ eth_dev->dev_ops = NULL;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ rte_free(eth_dev->data->rx_queues);
+ rte_free(eth_dev->data->tx_queues);
+ rte_free(eth_dev->data->mac_addrs);
+ rte_free(eth_dev->data->hash_mac_addrs);
+ rte_free(eth_dev->data->dev_private);
+ pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
+ memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return 0;
+}
+
+int
+rte_eth_dev_create(struct rte_device *device, const char *name,
+ size_t priv_data_size,
+ ethdev_bus_specific_init ethdev_bus_specific_init,
+ void *bus_init_params,
+ ethdev_init_t ethdev_init, void *init_params)
+{
+ struct rte_eth_dev *ethdev;
+ int retval;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ethdev = rte_eth_dev_allocate(name);
+ if (!ethdev)
+ return -ENODEV;
+
+ if (priv_data_size) {
+ ethdev->data->dev_private = rte_zmalloc_socket(
+ name, priv_data_size, RTE_CACHE_LINE_SIZE,
+ device->numa_node);
+
+ if (!ethdev->data->dev_private) {
+ RTE_ETHDEV_LOG(ERR,
+ "failed to allocate private data\n");
+ retval = -ENOMEM;
+ goto probe_failed;
+ }
+ }
+ } else {
+ ethdev = rte_eth_dev_attach_secondary(name);
+ if (!ethdev) {
+ RTE_ETHDEV_LOG(ERR,
+ "secondary process attach failed, ethdev doesn't exist\n");
+ return -ENODEV;
+ }
+ }
+
+ ethdev->device = device;
+
+ if (ethdev_bus_specific_init) {
+ retval = ethdev_bus_specific_init(ethdev, bus_init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR,
+ "ethdev bus specific initialisation failed\n");
+ goto probe_failed;
+ }
+ }
+
+ retval = ethdev_init(ethdev, init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
+ goto probe_failed;
+ }
+
+ rte_eth_dev_probing_finish(ethdev);
+
+ return retval;
+
+probe_failed:
+ rte_eth_dev_release_port(ethdev);
+ return retval;
+}
+
+int
+rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
+ ethdev_uninit_t ethdev_uninit)
+{
+ int ret;
+
+ ethdev = rte_eth_dev_allocated(ethdev->data->name);
+ if (!ethdev)
+ return -ENODEV;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
+
+ ret = ethdev_uninit(ethdev);
+ if (ret)
+ return ret;
+
+ return rte_eth_dev_release_port(ethdev);
+}
+
+struct rte_eth_dev *
+rte_eth_dev_get_by_name(const char *name)
+{
+ uint16_t pid;
+
+ if (rte_eth_dev_get_port_by_name(name, &pid))
+ return NULL;
+
+ return &rte_eth_devices[pid];
+}
+
+int
+rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+int
+rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+void
+rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_started) {
+ RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
+ dev->data->port_id);
+ return;
+ }
+
+ eth_dev_rx_queue_config(dev, 0);
+ eth_dev_tx_queue_config(dev, 0);
+
+ memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
+}
+
+static int
+eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
+{
+ int state;
+ struct rte_kvargs_pair *pair;
+ char *letter;
+
+ arglist->str = strdup(str_in);
+ if (arglist->str == NULL)
+ return -ENOMEM;
+
+ letter = arglist->str;
+ state = 0;
+ arglist->count = 0;
+ pair = &arglist->pairs[0];
+ while (1) {
+ switch (state) {
+ case 0: /* Initial */
+ if (*letter == '=')
+ return -EINVAL;
+ else if (*letter == '\0')
+ return 0;
+
+ state = 1;
+ pair->key = letter;
+ /* fallthrough */
+
+ case 1: /* Parsing key */
+ if (*letter == '=') {
+ *letter = '\0';
+ pair->value = letter + 1;
+ state = 2;
+ } else if (*letter == ',' || *letter == '\0')
+ return -EINVAL;
+ break;
+
+
+ case 2: /* Parsing value */
+ if (*letter == '[')
+ state = 3;
+ else if (*letter == ',') {
+ *letter = '\0';
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ } else if (*letter == '\0') {
+ letter--;
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ }
+ break;
+
+ case 3: /* Parsing list */
+ if (*letter == ']')
+ state = 2;
+ else if (*letter == '\0')
+ return -EINVAL;
+ break;
+ }
+ letter++;
+ }
+}
+
+int
+rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
+{
+ struct rte_kvargs args;
+ struct rte_kvargs_pair *pair;
+ unsigned int i;
+ int result = 0;
+
+ memset(eth_da, 0, sizeof(*eth_da));
+
+ result = eth_dev_devargs_tokenise(&args, dargs);
+ if (result < 0)
+ goto parse_cleanup;
+
+ for (i = 0; i < args.count; i++) {
+ pair = &args.pairs[i];
+ if (strcmp("representor", pair->key) == 0) {
+ if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
+ RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
+ dargs);
+ result = -1;
+ goto parse_cleanup;
+ }
+ result = rte_eth_devargs_parse_representor_ports(
+ pair->value, eth_da);
+ if (result < 0)
+ goto parse_cleanup;
+ }
+ }
+
+parse_cleanup:
+ if (args.str)
+ free(args.str);
+
+ return result;
+}
+
+static inline int
+eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
+ const char *ring_name)
+{
+ return snprintf(name, len, "eth_p%d_q%d_%s",
+ port_id, queue_id, ring_name);
+}
+
+int
+rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc = 0;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ return -ENAMETOOLONG;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz)
+ rc = rte_memzone_free(mz);
+ else
+ rc = -ENOENT;
+
+ return rc;
+}
+
+const struct rte_memzone *
+rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id, size_t size, unsigned int align,
+ int socket_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz) {
+ if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
+ size > mz->len ||
+ ((uintptr_t)mz->addr & (align - 1)) != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "memzone %s does not justify the requested attributes\n",
+ mz->name);
+ return NULL;
+ }
+
+ return mz;
+ }
+
+ return rte_memzone_reserve_aligned(z_name, size, socket_id,
+ RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+int
+rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
+ peer_info, direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
+ direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
+ struct rte_hairpin_peer_info *cur_info,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* Current queue information is not mandatory. */
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[peer_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
+ cur_info, peer_info, direction);
+}
+
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
+ .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
uint16_t
rte_eth_pkt_burst_dummy(void *queue __rte_unused,
@@ -11,3 +637,135 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
{
return 0;
}
+
+int
+rte_eth_representor_id_get(uint16_t port_id,
+ enum rte_eth_representor_type type,
+ int controller, int pf, int representor_port,
+ uint16_t *repr_id)
+{
+ int ret, n, count;
+ uint32_t i;
+ struct rte_eth_representor_info *info = NULL;
+ size_t size;
+
+ if (type == RTE_ETH_REPRESENTOR_NONE)
+ return 0;
+ if (repr_id == NULL)
+ return -EINVAL;
+
+ /* Get PMD representor range info. */
+ ret = rte_eth_representor_info_get(port_id, NULL);
+ if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
+ controller == -1 && pf == -1) {
+ /* Direct mapping for legacy VF representor. */
+ *repr_id = representor_port;
+ return 0;
+ } else if (ret < 0) {
+ return ret;
+ }
+ n = ret;
+ size = sizeof(*info) + n * sizeof(info->ranges[0]);
+ info = calloc(1, size);
+ if (info == NULL)
+ return -ENOMEM;
+ info->nb_ranges_alloc = n;
+ ret = rte_eth_representor_info_get(port_id, info);
+ if (ret < 0)
+ goto out;
+
+ /* Default controller and pf to caller. */
+ if (controller == -1)
+ controller = info->controller;
+ if (pf == -1)
+ pf = info->pf;
+
+ /* Locate representor ID. */
+ ret = -ENOENT;
+ for (i = 0; i < info->nb_ranges; ++i) {
+ if (info->ranges[i].type != type)
+ continue;
+ if (info->ranges[i].controller != controller)
+ continue;
+ if (info->ranges[i].id_end < info->ranges[i].id_base) {
+ RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
+ port_id, info->ranges[i].id_base,
+ info->ranges[i].id_end, i);
+ continue;
+
+ }
+ count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
+ switch (info->ranges[i].type) {
+ case RTE_ETH_REPRESENTOR_PF:
+ if (pf < info->ranges[i].pf ||
+ pf >= info->ranges[i].pf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (pf - info->ranges[i].pf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_VF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].vf ||
+ representor_port >= info->ranges[i].vf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].vf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_SF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].sf ||
+ representor_port >= info->ranges[i].sf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].sf);
+ ret = 0;
+ goto out;
+ default:
+ break;
+ }
+ }
+out:
+ free(info);
+ return ret;
+}
+
+int
+rte_eth_switch_domain_alloc(uint16_t *domain_id)
+{
+ uint16_t i;
+
+ *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (eth_dev_switch_domains[i].state ==
+ RTE_ETH_SWITCH_DOMAIN_UNUSED) {
+ eth_dev_switch_domains[i].state =
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
+ *domain_id = i;
+ return 0;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+int
+rte_eth_switch_domain_free(uint16_t domain_id)
+{
+ if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
+ domain_id >= RTE_MAX_ETHPORTS)
+ return -EINVAL;
+
+ if (eth_dev_switch_domains[domain_id].state !=
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
+ return -EINVAL;
+
+ eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
+
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 8fca20c7d45b..84dc0b320ed0 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -3,10 +3,22 @@
*/
#include <rte_debug.h>
+
#include "rte_ethdev.h"
#include "ethdev_driver.h"
#include "ethdev_private.h"
+static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
+
+/* Shared memory between primary and secondary processes. */
+struct eth_dev_shared *eth_dev_shared_data;
+
+/* spinlock for shared data allocation */
+static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
+
+/* spinlock for eth device callbacks */
+rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
uint16_t
eth_dev_to_id(const struct rte_eth_dev *dev)
{
@@ -302,3 +314,122 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
return nb_pkts;
}
+
+void
+eth_dev_shared_data_prepare(void)
+{
+ const unsigned int flags = 0;
+ const struct rte_memzone *mz;
+
+ rte_spinlock_lock(&eth_dev_shared_data_lock);
+
+ if (eth_dev_shared_data == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* Allocate port data and ownership shared memory. */
+ mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
+ sizeof(*eth_dev_shared_data),
+ rte_socket_id(), flags);
+ } else
+ mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
+ if (mz == NULL)
+ rte_panic("Cannot allocate ethdev shared data\n");
+
+ eth_dev_shared_data = mz->addr;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ eth_dev_shared_data->next_owner_id =
+ RTE_ETH_DEV_NO_OWNER + 1;
+ rte_spinlock_init(&eth_dev_shared_data->ownership_lock);
+ memset(eth_dev_shared_data->data, 0,
+ sizeof(eth_dev_shared_data->data));
+ }
+ }
+
+ rte_spinlock_unlock(&eth_dev_shared_data_lock);
+}
+
+void
+eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **rxq = dev->data->rx_queues;
+
+ if (rxq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->rx_queue_release != NULL)
+ (*dev->dev_ops->rx_queue_release)(dev, qid);
+ rxq[qid] = NULL;
+}
+
+void
+eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **txq = dev->data->tx_queues;
+
+ if (txq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->tx_queue_release != NULL)
+ (*dev->dev_ops->tx_queue_release)(dev, qid);
+ txq[qid] = NULL;
+}
+
+int
+eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_rx_queues;
+ unsigned int i;
+
+ if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
+ sizeof(dev->data->rx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->rx_queues == NULL) {
+ dev->data->nb_rx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ rte_free(dev->data->rx_queues);
+ dev->data->rx_queues = NULL;
+ }
+ dev->data->nb_rx_queues = nb_queues;
+ return 0;
+}
+
+int
+eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_tx_queues;
+ unsigned int i;
+
+ if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
+ sizeof(dev->data->tx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->tx_queues == NULL) {
+ dev->data->nb_tx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ rte_free(dev->data->tx_queues);
+ dev->data->tx_queues = NULL;
+ }
+ dev->data->nb_tx_queues = nb_queues;
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index cc91025e8d9b..cc9879907ce5 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -5,10 +5,38 @@
#ifndef _ETH_PRIVATE_H_
#define _ETH_PRIVATE_H_
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
#include <rte_os_shim.h>
#include "rte_ethdev.h"
+struct eth_dev_shared {
+ uint64_t next_owner_id;
+ rte_spinlock_t ownership_lock;
+ struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
+};
+
+extern struct eth_dev_shared *eth_dev_shared_data;
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_eth_dev_callback {
+ TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
+ rte_eth_dev_cb_fn cb_fn; /**< Callback address */
+ void *cb_arg; /**< Parameter for callback */
+ void *ret_param; /**< Return parameter */
+ enum rte_eth_event_type event; /**< Interrupt event type */
+ uint32_t active; /**< Callback is executing */
+};
+
+extern rte_spinlock_t eth_dev_cb_lock;
+
/*
* Convert rte_eth_dev pointer to port ID.
* NULL will be translated to RTE_MAX_ETHPORTS.
@@ -33,4 +61,12 @@ void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
const struct rte_eth_dev *dev);
+
+void eth_dev_shared_data_prepare(void);
+
+void eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid);
+void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid);
+int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 2a479bea2128..70c850a2f18a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -30,7 +30,6 @@
#include <rte_errno.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_kvargs.h>
#include <rte_class.h>
#include <rte_ether.h>
#include <rte_telemetry.h>
@@ -41,37 +40,23 @@
#include "ethdev_profile.h"
#include "ethdev_private.h"
-static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
/* public fast-path API */
struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-/* spinlock for eth device callbacks */
-static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
-
/* spinlock for add/remove Rx callbacks */
static rte_spinlock_t eth_dev_rx_cb_lock = RTE_SPINLOCK_INITIALIZER;
/* spinlock for add/remove Tx callbacks */
static rte_spinlock_t eth_dev_tx_cb_lock = RTE_SPINLOCK_INITIALIZER;
-/* spinlock for shared data allocation */
-static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
-
/* store statistics names and its offset in stats structure */
struct rte_eth_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
unsigned offset;
};
-/* Shared memory between primary and secondary processes. */
-static struct {
- uint64_t next_owner_id;
- rte_spinlock_t ownership_lock;
- struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
-} *eth_dev_shared_data;
-
static const struct rte_eth_xstats_name_off eth_dev_stats_strings[] = {
{"rx_good_packets", offsetof(struct rte_eth_stats, ipackets)},
{"tx_good_packets", offsetof(struct rte_eth_stats, opackets)},
@@ -175,21 +160,6 @@ static const struct {
{RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, "FLOW_SHARED_OBJECT_KEEP"},
};
-/**
- * The user application callback description.
- *
- * It contains callback address to be registered by user application,
- * the pointer to the parameters for callback, and the event type.
- */
-struct rte_eth_dev_callback {
- TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
- rte_eth_dev_cb_fn cb_fn; /**< Callback address */
- void *cb_arg; /**< Parameter for callback */
- void *ret_param; /**< Return parameter */
- enum rte_eth_event_type event; /**< Interrupt event type */
- uint32_t active; /**< Callback is executing */
-};
-
enum {
STAT_QMAP_TX = 0,
STAT_QMAP_RX
@@ -399,227 +369,12 @@ rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id)
rte_eth_devices[ref_port_id].device);
}
-static void
-eth_dev_shared_data_prepare(void)
-{
- const unsigned flags = 0;
- const struct rte_memzone *mz;
-
- rte_spinlock_lock(&eth_dev_shared_data_lock);
-
- if (eth_dev_shared_data == NULL) {
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Allocate port data and ownership shared memory. */
- mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
- sizeof(*eth_dev_shared_data),
- rte_socket_id(), flags);
- } else
- mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
- if (mz == NULL)
- rte_panic("Cannot allocate ethdev shared data\n");
-
- eth_dev_shared_data = mz->addr;
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- eth_dev_shared_data->next_owner_id =
- RTE_ETH_DEV_NO_OWNER + 1;
- rte_spinlock_init(&eth_dev_shared_data->ownership_lock);
- memset(eth_dev_shared_data->data, 0,
- sizeof(eth_dev_shared_data->data));
- }
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data_lock);
-}
-
static bool
eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
{
return ethdev->data->name[0] != '\0';
}
-static struct rte_eth_dev *
-eth_dev_allocated(const char *name)
-{
- uint16_t i;
-
- RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].data != NULL &&
- strcmp(rte_eth_devices[i].data->name, name) == 0)
- return &rte_eth_devices[i];
- }
- return NULL;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocated(const char *name)
-{
- struct rte_eth_dev *ethdev;
-
- eth_dev_shared_data_prepare();
-
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- ethdev = eth_dev_allocated(name);
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return ethdev;
-}
-
-static uint16_t
-eth_dev_find_free_port(void)
-{
- uint16_t i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- /* Using shared name field to find a free port. */
- if (eth_dev_shared_data->data[i].name[0] == '\0') {
- RTE_ASSERT(rte_eth_devices[i].state ==
- RTE_ETH_DEV_UNUSED);
- return i;
- }
- }
- return RTE_MAX_ETHPORTS;
-}
-
-static struct rte_eth_dev *
-eth_dev_get(uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
-
- eth_dev->data = ð_dev_shared_data->data[port_id];
-
- return eth_dev;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocate(const char *name)
-{
- uint16_t port_id;
- struct rte_eth_dev *eth_dev = NULL;
- size_t name_len;
-
- name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
- if (name_len == 0) {
- RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
- return NULL;
- }
-
- if (name_len >= RTE_ETH_NAME_MAX_LEN) {
- RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
- return NULL;
- }
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port creation between primary and secondary threads. */
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- if (eth_dev_allocated(name) != NULL) {
- RTE_ETHDEV_LOG(ERR,
- "Ethernet device with name %s already allocated\n",
- name);
- goto unlock;
- }
-
- port_id = eth_dev_find_free_port();
- if (port_id == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Reached maximum number of Ethernet ports\n");
- goto unlock;
- }
-
- eth_dev = eth_dev_get(port_id);
- strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
- eth_dev->data->port_id = port_id;
- eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
- eth_dev->data->mtu = RTE_ETHER_MTU;
- pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
-
-unlock:
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return eth_dev;
-}
-
-/*
- * Attach to a port already registered by the primary process, which
- * makes sure that the same device would have the same port ID both
- * in the primary and secondary process.
- */
-struct rte_eth_dev *
-rte_eth_dev_attach_secondary(const char *name)
-{
- uint16_t i;
- struct rte_eth_dev *eth_dev = NULL;
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port attachment to primary port creation and release. */
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
- break;
- }
- if (i == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Device %s is not driven by the primary process\n",
- name);
- } else {
- eth_dev = eth_dev_get(i);
- RTE_ASSERT(eth_dev->data->port_id == i);
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
- return eth_dev;
-}
-
-int
-rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
-{
- if (eth_dev == NULL)
- return -EINVAL;
-
- eth_dev_shared_data_prepare();
-
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
- rte_eth_dev_callback_process(eth_dev,
- RTE_ETH_EVENT_DESTROY, NULL);
-
- eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
-
- rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
-
- eth_dev->state = RTE_ETH_DEV_UNUSED;
- eth_dev->device = NULL;
- eth_dev->process_private = NULL;
- eth_dev->intr_handle = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- eth_dev->tx_pkt_prepare = NULL;
- eth_dev->rx_queue_count = NULL;
- eth_dev->rx_descriptor_status = NULL;
- eth_dev->tx_descriptor_status = NULL;
- eth_dev->dev_ops = NULL;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- rte_free(eth_dev->data->rx_queues);
- rte_free(eth_dev->data->tx_queues);
- rte_free(eth_dev->data->mac_addrs);
- rte_free(eth_dev->data->hash_mac_addrs);
- rte_free(eth_dev->data->dev_private);
- pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
- memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
- }
-
- rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
-
- return 0;
-}
-
int
rte_eth_dev_is_valid_port(uint16_t port_id)
{
@@ -894,17 +649,6 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
return -ENODEV;
}
-struct rte_eth_dev *
-rte_eth_dev_get_by_name(const char *name)
-{
- uint16_t pid;
-
- if (rte_eth_dev_get_port_by_name(name, &pid))
- return NULL;
-
- return &rte_eth_devices[pid];
-}
-
static int
eth_err(uint16_t port_id, int ret)
{
@@ -915,62 +659,6 @@ eth_err(uint16_t port_id, int ret)
return ret;
}
-static void
-eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **rxq = dev->data->rx_queues;
-
- if (rxq[qid] == NULL)
- return;
-
- if (dev->dev_ops->rx_queue_release != NULL)
- (*dev->dev_ops->rx_queue_release)(dev, qid);
- rxq[qid] = NULL;
-}
-
-static void
-eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **txq = dev->data->tx_queues;
-
- if (txq[qid] == NULL)
- return;
-
- if (dev->dev_ops->tx_queue_release != NULL)
- (*dev->dev_ops->tx_queue_release)(dev, qid);
- txq[qid] = NULL;
-}
-
-static int
-eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_rx_queues;
- unsigned i;
-
- if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
- sizeof(dev->data->rx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
- dev->data->nb_rx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- rte_free(dev->data->rx_queues);
- dev->data->rx_queues = NULL;
- }
- dev->data->nb_rx_queues = nb_queues;
- return 0;
-}
-
static int
eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
@@ -1161,36 +849,6 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
return eth_err(port_id, dev->dev_ops->tx_queue_stop(dev, tx_queue_id));
}
-static int
-eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_tx_queues;
- unsigned i;
-
- if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
- sizeof(dev->data->tx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
- dev->data->nb_tx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- rte_free(dev->data->tx_queues);
- dev->data->tx_queues = NULL;
- }
- dev->data->nb_tx_queues = nb_queues;
- return 0;
-}
-
uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
@@ -1682,21 +1340,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return ret;
}
-void
-rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
-{
- if (dev->data->dev_started) {
- RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
- dev->data->port_id);
- return;
- }
-
- eth_dev_rx_queue_config(dev, 0);
- eth_dev_tx_queue_config(dev, 0);
-
- memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
-}
-
static void
eth_dev_mac_restore(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -4914,52 +4557,6 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
return ret;
}
-int
-rte_eth_dev_callback_process(struct rte_eth_dev *dev,
- enum rte_eth_event_type event, void *ret_param)
-{
- struct rte_eth_dev_callback *cb_lst;
- struct rte_eth_dev_callback dev_cb;
- int rc = 0;
-
- rte_spinlock_lock(&eth_dev_cb_lock);
- TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
- if (cb_lst->cb_fn == NULL || cb_lst->event != event)
- continue;
- dev_cb = *cb_lst;
- cb_lst->active = 1;
- if (ret_param != NULL)
- dev_cb.ret_param = ret_param;
-
- rte_spinlock_unlock(&eth_dev_cb_lock);
- rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
- dev_cb.cb_arg, dev_cb.ret_param);
- rte_spinlock_lock(&eth_dev_cb_lock);
- cb_lst->active = 0;
- }
- rte_spinlock_unlock(&eth_dev_cb_lock);
- return rc;
-}
-
-void
-rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
-{
- if (dev == NULL)
- return;
-
- /*
- * for secondary process, at that point we expect device
- * to be already 'usable', so shared data and all function pointers
- * for fast-path devops have to be setup properly inside rte_eth_dev.
- */
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
-
- rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
-
- dev->state = RTE_ETH_DEV_ATTACHED;
-}
-
int
rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
{
@@ -5032,156 +4629,6 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
return fd;
}
-static inline int
-eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
- const char *ring_name)
-{
- return snprintf(name, len, "eth_p%d_q%d_%s",
- port_id, queue_id, ring_name);
-}
-
-const struct rte_memzone *
-rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id, size_t size, unsigned align,
- int socket_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- rte_errno = ENAMETOOLONG;
- return NULL;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz) {
- if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
- size > mz->len ||
- ((uintptr_t)mz->addr & (align - 1)) != 0) {
- RTE_ETHDEV_LOG(ERR,
- "memzone %s does not justify the requested attributes\n",
- mz->name);
- return NULL;
- }
-
- return mz;
- }
-
- return rte_memzone_reserve_aligned(z_name, size, socket_id,
- RTE_MEMZONE_IOVA_CONTIG, align);
-}
-
-int
-rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc = 0;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- return -ENAMETOOLONG;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz)
- rc = rte_memzone_free(mz);
- else
- rc = -ENOENT;
-
- return rc;
-}
-
-int
-rte_eth_dev_create(struct rte_device *device, const char *name,
- size_t priv_data_size,
- ethdev_bus_specific_init ethdev_bus_specific_init,
- void *bus_init_params,
- ethdev_init_t ethdev_init, void *init_params)
-{
- struct rte_eth_dev *ethdev;
- int retval;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- ethdev = rte_eth_dev_allocate(name);
- if (!ethdev)
- return -ENODEV;
-
- if (priv_data_size) {
- ethdev->data->dev_private = rte_zmalloc_socket(
- name, priv_data_size, RTE_CACHE_LINE_SIZE,
- device->numa_node);
-
- if (!ethdev->data->dev_private) {
- RTE_ETHDEV_LOG(ERR,
- "failed to allocate private data\n");
- retval = -ENOMEM;
- goto probe_failed;
- }
- }
- } else {
- ethdev = rte_eth_dev_attach_secondary(name);
- if (!ethdev) {
- RTE_ETHDEV_LOG(ERR,
- "secondary process attach failed, ethdev doesn't exist\n");
- return -ENODEV;
- }
- }
-
- ethdev->device = device;
-
- if (ethdev_bus_specific_init) {
- retval = ethdev_bus_specific_init(ethdev, bus_init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR,
- "ethdev bus specific initialisation failed\n");
- goto probe_failed;
- }
- }
-
- retval = ethdev_init(ethdev, init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
- goto probe_failed;
- }
-
- rte_eth_dev_probing_finish(ethdev);
-
- return retval;
-
-probe_failed:
- rte_eth_dev_release_port(ethdev);
- return retval;
-}
-
-int
-rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
- ethdev_uninit_t ethdev_uninit)
-{
- int ret;
-
- ethdev = rte_eth_dev_allocated(ethdev->data->name);
- if (!ethdev)
- return -ENODEV;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
-
- ret = ethdev_uninit(ethdev);
- if (ret)
- return ret;
-
- return rte_eth_dev_release_port(ethdev);
-}
-
int
rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
int epfd, int op, void *data)
@@ -6005,22 +5452,6 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
return eth_err(port_id, (*dev->dev_ops->hairpin_cap_get)(dev, cap));
}
-int
-rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
-int
-rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
int
rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
{
@@ -6042,255 +5473,6 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
return (*dev->dev_ops->pool_ops_supported)(dev, pool);
}
-/**
- * A set of values to describe the possible states of a switch domain.
- */
-enum rte_eth_switch_domain_state {
- RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED
-};
-
-/**
- * Array of switch domains available for allocation. Array is sized to
- * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
- * ethdev ports in a single process.
- */
-static struct rte_eth_dev_switch {
- enum rte_eth_switch_domain_state state;
-} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
-
-int
-rte_eth_switch_domain_alloc(uint16_t *domain_id)
-{
- uint16_t i;
-
- *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (eth_dev_switch_domains[i].state ==
- RTE_ETH_SWITCH_DOMAIN_UNUSED) {
- eth_dev_switch_domains[i].state =
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
- *domain_id = i;
- return 0;
- }
- }
-
- return -ENOSPC;
-}
-
-int
-rte_eth_switch_domain_free(uint16_t domain_id)
-{
- if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
- domain_id >= RTE_MAX_ETHPORTS)
- return -EINVAL;
-
- if (eth_dev_switch_domains[domain_id].state !=
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
- return -EINVAL;
-
- eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
-
- return 0;
-}
-
-static int
-eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
-{
- int state;
- struct rte_kvargs_pair *pair;
- char *letter;
-
- arglist->str = strdup(str_in);
- if (arglist->str == NULL)
- return -ENOMEM;
-
- letter = arglist->str;
- state = 0;
- arglist->count = 0;
- pair = &arglist->pairs[0];
- while (1) {
- switch (state) {
- case 0: /* Initial */
- if (*letter == '=')
- return -EINVAL;
- else if (*letter == '\0')
- return 0;
-
- state = 1;
- pair->key = letter;
- /* fall-thru */
-
- case 1: /* Parsing key */
- if (*letter == '=') {
- *letter = '\0';
- pair->value = letter + 1;
- state = 2;
- } else if (*letter == ',' || *letter == '\0')
- return -EINVAL;
- break;
-
-
- case 2: /* Parsing value */
- if (*letter == '[')
- state = 3;
- else if (*letter == ',') {
- *letter = '\0';
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- } else if (*letter == '\0') {
- letter--;
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- }
- break;
-
- case 3: /* Parsing list */
- if (*letter == ']')
- state = 2;
- else if (*letter == '\0')
- return -EINVAL;
- break;
- }
- letter++;
- }
-}
-
-int
-rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
-{
- struct rte_kvargs args;
- struct rte_kvargs_pair *pair;
- unsigned int i;
- int result = 0;
-
- memset(eth_da, 0, sizeof(*eth_da));
-
- result = eth_dev_devargs_tokenise(&args, dargs);
- if (result < 0)
- goto parse_cleanup;
-
- for (i = 0; i < args.count; i++) {
- pair = &args.pairs[i];
- if (strcmp("representor", pair->key) == 0) {
- if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
- RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
- dargs);
- result = -1;
- goto parse_cleanup;
- }
- result = rte_eth_devargs_parse_representor_ports(
- pair->value, eth_da);
- if (result < 0)
- goto parse_cleanup;
- }
- }
-
-parse_cleanup:
- if (args.str)
- free(args.str);
-
- return result;
-}
-
-int
-rte_eth_representor_id_get(uint16_t port_id,
- enum rte_eth_representor_type type,
- int controller, int pf, int representor_port,
- uint16_t *repr_id)
-{
- int ret, n, count;
- uint32_t i;
- struct rte_eth_representor_info *info = NULL;
- size_t size;
-
- if (type == RTE_ETH_REPRESENTOR_NONE)
- return 0;
- if (repr_id == NULL)
- return -EINVAL;
-
- /* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(port_id, NULL);
- if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
- controller == -1 && pf == -1) {
- /* Direct mapping for legacy VF representor. */
- *repr_id = representor_port;
- return 0;
- } else if (ret < 0) {
- return ret;
- }
- n = ret;
- size = sizeof(*info) + n * sizeof(info->ranges[0]);
- info = calloc(1, size);
- if (info == NULL)
- return -ENOMEM;
- info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(port_id, info);
- if (ret < 0)
- goto out;
-
- /* Default controller and pf to caller. */
- if (controller == -1)
- controller = info->controller;
- if (pf == -1)
- pf = info->pf;
-
- /* Locate representor ID. */
- ret = -ENOENT;
- for (i = 0; i < info->nb_ranges; ++i) {
- if (info->ranges[i].type != type)
- continue;
- if (info->ranges[i].controller != controller)
- continue;
- if (info->ranges[i].id_end < info->ranges[i].id_base) {
- RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- port_id, info->ranges[i].id_base,
- info->ranges[i].id_end, i);
- continue;
-
- }
- count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
- switch (info->ranges[i].type) {
- case RTE_ETH_REPRESENTOR_PF:
- if (pf < info->ranges[i].pf ||
- pf >= info->ranges[i].pf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (pf - info->ranges[i].pf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_VF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].vf ||
- representor_port >= info->ranges[i].vf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].vf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_SF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].sf ||
- representor_port >= info->ranges[i].sf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].sf);
- ret = 0;
- goto out;
- default:
- break;
- }
- }
-out:
- free(info);
- return ret;
-}
-
static int
eth_dev_handle_port_list(const char *cmd __rte_unused,
const char *params __rte_unused,
@@ -6533,61 +5715,6 @@ eth_dev_handle_port_info(const char *cmd __rte_unused,
return 0;
}
-int
-rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
- struct rte_hairpin_peer_info *cur_info,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* Current queue information is not mandatory. */
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[peer_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
- cur_info, peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
- peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
- direction);
-}
-
int
rte_eth_representor_info_get(uint16_t port_id,
struct rte_eth_representor_info *info)
@@ -6722,34 +5849,6 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
-int
-rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
-{
- static const struct rte_mbuf_dynfield field_desc = {
- .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
- .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
- .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
- };
- static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
- .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
- };
- int offset;
-
- offset = rte_mbuf_dynfield_register(&field_desc);
- if (offset < 0)
- return -1;
- if (field_offset != NULL)
- *field_offset = offset;
-
- offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
- if (offset < 0)
- return -1;
- if (flag_offset != NULL)
- *flag_offset = offset;
-
- return 0;
-}
-
int
rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
{
--
2.34.1
* Re: [PATCH v3 2/2] ethdev: move driver interface functions to its own file
2022-02-11 18:09 ` Thomas Monjalon
@ 2022-02-11 18:39 ` Ferruh Yigit
0 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 18:39 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Andrew Rybchenko, Anatoly Burakov, dev
On 2/11/2022 6:09 PM, Thomas Monjalon wrote:
> 11/02/2022 18:14, Ferruh Yigit:
>> Relevant functions moved to ethdev_driver.c.
>> No functional change.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++
>> lib/ethdev/ethdev_private.c | 131 ++++++
>> lib/ethdev/ethdev_private.h | 36 ++
>> lib/ethdev/rte_ethdev.c | 901 ------------------------------------
>> 4 files changed, 925 insertions(+), 901 deletions(-)
>
> Please could you add more details in the commit log while merging?
> We need to know that they are internal functions used only by drivers.
> Also it would be interesting to explain the difference between
> ethdev_driver.c and ethdev_private.h.
>
I put some more description in v4; can you please check whether it looks good?
> With this info,
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
>
* Re: [PATCH v4 2/2] ethdev: move driver interface functions to its own file
2022-02-11 18:38 ` [PATCH v4 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
@ 2022-02-11 18:55 ` Thomas Monjalon
2022-02-11 19:01 ` Ferruh Yigit
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2022-02-11 18:55 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Andrew Rybchenko, Anatoly Burakov, dev
11/02/2022 19:38, Ferruh Yigit:
> ethdev has two interfaces, one interface between applications and
> library, these APIs are declared in the ethdev.h public header.
> Other interface is between drivers and library, these functions are
> declared in ethdev_driver.h and marked as internal.
>
> But all functions are defined in rte_ethdev.c file. This patch moves
> functions for drivers to its own file, ethdev_driver.c for cleanup, no
> functional change in functions.
>
> Some public APIs and driver APIs call common internal functions, which
here
> were mostly static since both were in same file. To be able to move
> driver APIs, common functions are moved into ethdev_private.c.
and there, "driver APIs" should be "driver helpers", right?
> (ethdev_private.c is used for functions that are internal to the library
> and shared by multiple .c files in the ethdev library.)
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
* Re: [PATCH v4 2/2] ethdev: move driver interface functions to its own file
2022-02-11 18:55 ` Thomas Monjalon
@ 2022-02-11 19:01 ` Ferruh Yigit
0 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 19:01 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Andrew Rybchenko, Anatoly Burakov, dev
On 2/11/2022 6:55 PM, Thomas Monjalon wrote:
> 11/02/2022 19:38, Ferruh Yigit:
>> ethdev has two interfaces, one interface between applications and
>> library, these APIs are declared in the ethdev.h public header.
I will update the file name to 'rte_ethdev.h'
>> Other interface is between drivers and library, these functions are
>> declared in ethdev_driver.h and marked as internal.
>>
>> But all functions are defined in rte_ethdev.c file. This patch moves
>> functions for drivers to its own file, ethdev_driver.c for cleanup, no
>> functional change in functions.
>>
>> Some public APIs and driver APIs call common internal functions, which
>
> here
>
>> were mostly static since both were in same file. To be able to move
>> driver APIs, common functions are moved into ethdev_private.c.
>
> and there, "driver APIs" should be "driver helpers", right?
>
I wasn't sure what to call them; I will update it to "driver helpers"
>> (ethdev_private.c is used for functions that are internal to the library
>> and shared by multiple .c files in the ethdev library.)
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
>
* [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
` (5 preceding siblings ...)
2022-02-11 18:38 ` [PATCH v4 " Ferruh Yigit
@ 2022-02-11 19:11 ` Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 20:18 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
6 siblings, 2 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 19:11 UTC (permalink / raw)
To: Ciara Loftus, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, John Daley, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, Ray Kinsella
Cc: dev, Ferruh Yigit, Morten Brørup
Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
These dummy functions are very simple, introduce a common function in
the ethdev and update drivers to use it instead of each driver having
its own functions.
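For illustration, here is a minimal sketch of the pattern being consolidated
(an editorial example, not code from any particular PMD; the xxx_* names are
hypothetical, while rte_eth_pkt_burst_dummy is the helper added by this patch):
#include <ethdev_driver.h>
#include <rte_atomic.h>
/* Before: each PMD carried its own copy of this trivial callback. */
static uint16_t
xxx_dummy_burst(void *queue __rte_unused,
		struct rte_mbuf **pkts __rte_unused,
		uint16_t nb_pkts __rte_unused)
{
	return 0; /* nothing received or transmitted */
}
/* After: the PMD points its burst callbacks at the common helper. */
static void
xxx_stop_datapath(struct rte_eth_dev *dev)
{
	dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
	dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
	rte_mb(); /* make the switch visible to datapath lcores */
}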
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
Cc: Ciara Loftus <ciara.loftus@intel.com>
v2:
* Convert inline function to an actual function in the new ethdev_driver.c
file. This is because of function pointer comparisons in PMDs (see the
sketch at the end of these notes). The PMD interface of ethdev can be
moved to 'ethdev_driver.c' later.
v3:
* updated af_xdp too
v4:
* Commit log updated and checkpatch warning fixed
v5:
* Commit log updated, 'driver API' -> 'driver helper'
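As a rough illustration of the v2 note above (an editorial sketch, not code
from the patch; the xxx_* prefix is a hypothetical PMD name), such pointer
comparisons only stay valid because the helper has a single out-of-line
definition:
#include <stdbool.h>
#include <ethdev_driver.h>
static bool
xxx_datapath_is_stopped(const struct rte_eth_dev *dev)
{
	/*
	 * Reliable only because rte_eth_pkt_burst_dummy is defined once in
	 * ethdev_driver.c; a static inline copy in a header could resolve
	 * to a different address in each binary, breaking this check.
	 */
	return dev->rx_pkt_burst == rte_eth_pkt_burst_dummy &&
		dev->tx_pkt_burst == rte_eth_pkt_burst_dummy;
}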
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 26 ++-------------
drivers/net/ark/ark_ethdev.c | 8 ++---
drivers/net/ark/ark_ethdev_rx.c | 9 -----
drivers/net/ark/ark_ethdev_rx.h | 2 --
drivers/net/ark/ark_ethdev_tx.c | 9 -----
drivers/net/ark/ark_ethdev_tx.h | 3 --
drivers/net/bnx2x/bnx2x_rxtx.c | 12 ++-----
drivers/net/bnxt/bnxt.h | 4 ---
drivers/net/bnxt/bnxt_cpr.c | 4 +--
drivers/net/bnxt/bnxt_rxr.c | 14 --------
drivers/net/bnxt/bnxt_txr.c | 14 --------
drivers/net/cnxk/cnxk_ethdev.c | 14 ++------
drivers/net/dpaa2/dpaa2_ethdev.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.h | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 25 --------------
drivers/net/enic/enic.h | 3 --
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 2 +-
drivers/net/enic/enic_rxtx.c | 11 ------
drivers/net/hns3/hns3_rxtx.c | 18 +++-------
drivers/net/hns3/hns3_rxtx.h | 3 --
drivers/net/mlx4/mlx4.c | 8 ++---
drivers/net/mlx4/mlx4_mp.c | 4 +--
drivers/net/mlx4/mlx4_rxtx.c | 52 -----------------------------
drivers/net/mlx4/mlx4_rxtx.h | 4 ---
drivers/net/mlx5/linux/mlx5_mp_os.c | 4 +--
drivers/net/mlx5/linux/mlx5_os.c | 4 +--
drivers/net/mlx5/mlx5.c | 4 +--
drivers/net/mlx5/mlx5_rx.c | 27 +--------------
drivers/net/mlx5/mlx5_rx.h | 2 --
drivers/net/mlx5/mlx5_trigger.c | 4 +--
drivers/net/mlx5/mlx5_tx.c | 25 --------------
drivers/net/mlx5/mlx5_tx.h | 2 --
drivers/net/mlx5/windows/mlx5_os.c | 4 +--
drivers/net/pfe/pfe_ethdev.c | 20 ++---------
drivers/net/qede/qede_ethdev.c | 4 +--
drivers/net/qede/qede_rxtx.c | 9 -----
drivers/net/qede/qede_rxtx.h | 3 --
lib/ethdev/ethdev_driver.c | 13 ++++++++
lib/ethdev/ethdev_driver.h | 17 ++++++++++
lib/ethdev/meson.build | 1 +
lib/ethdev/version.map | 1 +
42 files changed, 73 insertions(+), 325 deletions(-)
create mode 100644 lib/ethdev/ethdev_driver.c
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 4a37c11960e1..6ac710c6bdc6 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1916,28 +1916,6 @@ afxdp_mp_send_fds(const struct rte_mp_msg *request, const void *peer)
return 0;
}
-/* Secondary process rx function. RX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_rx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
-/* Secondary process tx function. TX is disabled because memory mapping of the
- * rings being assigned by the kernel in the primary process only.
- */
-static uint16_t
-eth_af_xdp_tx_noop(void *queue __rte_unused,
- struct rte_mbuf **bufs __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
static int
rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
{
@@ -1961,8 +1939,8 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
}
eth_dev->dev_ops = &ops;
eth_dev->device = &dev->device;
- eth_dev->rx_pkt_burst = eth_af_xdp_rx_noop;
- eth_dev->tx_pkt_burst = eth_af_xdp_tx_noop;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->process_private = (struct pmd_process_private *)
rte_zmalloc_socket(name,
sizeof(struct pmd_process_private),
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3f023..230a1272e986 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -271,8 +271,8 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Use dummy function until setup */
-	dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
-	dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
@@ -605,8 +605,8 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
if (ark->start_pg)
ark_pktgen_pause(ark->pg);
-	dev->rx_pkt_burst = &eth_ark_recv_pkts_noop;
-	dev->tx_pkt_burst = &eth_ark_xmit_pkts_noop;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* STOP TX Side */
for (i = 0; i < dev->data->nb_tx_queues; i++) {
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 98658ce621e2..37a88cbedee4 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -228,15 +228,6 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_recv_pkts_noop(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 859fcf1e6f71..f64b3dd137b3 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -20,8 +20,6 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
-uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
void eth_ark_dev_rx_queue_release(void *rx_queue);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 676e4115d3bf..abdce6a8cc0d 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -105,15 +105,6 @@ eth_ark_tx_desc_fill(struct ark_tx_queue *queue,
}
-/* ************************************************************************* */
-uint16_t
-eth_ark_xmit_pkts_noop(void *vtxq __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
/* ************************************************************************* */
uint16_t
eth_ark_xmit_pkts(void *vtxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 12c71a7158a9..7134dbfeed81 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -10,9 +10,6 @@
#include <ethdev_driver.h>
-uint16_t eth_ark_xmit_pkts_noop(void *vtxq,
- struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
uint16_t eth_ark_xmit_pkts(void *vtxq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 66b0512c8695..cb5733c5972b 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -465,18 +465,10 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
-static uint16_t
-bnx2x_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
void bnx2x_dev_rxtx_init_dummy(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = bnx2x_rxtx_pkts_dummy;
- dev->tx_pkt_burst = bnx2x_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
}
void bnx2x_dev_rxtx_init(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0cbb58b2cf3e..44724a9dfe91 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -1014,10 +1014,6 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp);
int bnxt_link_update_op(struct rte_eth_dev *eth_dev,
int wait_to_complete);
-uint16_t bnxt_dummy_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
-uint16_t bnxt_dummy_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t nb_pkts);
extern const struct rte_flow_ops bnxt_flow_ops;
diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 9b9285b79903..99af0f9e87ee 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -408,8 +408,8 @@ bool bnxt_is_recovery_enabled(struct bnxt *bp)
void bnxt_stop_rxtx(struct rte_eth_dev *eth_dev)
{
- eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b60c2470f39e..5a9cf48e6739 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1147,20 +1147,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx_pkts;
}
-/*
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_recv_pkts(void *rx_queue __rte_unused,
- struct rte_mbuf **rx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
void bnxt_free_rx_rings(struct bnxt *bp)
{
int i;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 3b8f2382f92e..7a7196a23731 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -527,20 +527,6 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx_pkts;
}
-/*
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- */
-uint16_t
-bnxt_dummy_xmit_pkts(void *tx_queue __rte_unused,
- struct rte_mbuf **tx_pkts __rte_unused,
- uint16_t nb_pkts __rte_unused)
-{
- return 0;
-}
-
int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
struct bnxt *bp = dev->data->dev_private;
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 53dfb5eae80e..c6a9ada05bb4 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -942,16 +942,6 @@ nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
return rc;
}
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
static void
nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
@@ -962,8 +952,8 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
* which caused app crash since rx/tx burst is still
* on different lcores
*/
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
}
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 379daec5f4e8..5be4fef8fe68 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2005,7 +2005,7 @@ dpaa2_dev_set_link_down(struct rte_eth_dev *dev)
}
/*changing tx burst function to avoid any more enqueues */
- dev->tx_pkt_burst = dummy_dev_tx;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
/* Loop while dpni_disable() attempts to drain the egress FQs
* and confirm them back to us.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 1b49f43103a7..e79a7fc2e286 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -264,7 +264,6 @@ __rte_internal
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
struct rte_mbuf **bufs, uint16_t nb_pkts);
-uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
void dpaa2_flow_clean(struct rte_eth_dev *dev);
uint16_t dpaa2_dev_tx_conf(void *queue) __rte_unused;
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 81b28e20cb47..b8844fbdf107 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1802,31 +1802,6 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
-{
- (void)queue;
- (void)bufs;
- (void)nb_pkts;
- return 0;
-}
-
#if defined(RTE_TOOLCHAIN_GCC)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index d5493c98345d..163a1f037e26 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -426,9 +426,6 @@ uint16_t enic_recv_pkts_64(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t enic_noscatter_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t enic_dummy_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
uint16_t enic_simple_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 163be09809b1..a8d470e8ac93 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -538,7 +538,7 @@ static const uint32_t *enicpmd_dev_supported_ptypes_get(struct rte_eth_dev *dev)
RTE_PTYPE_UNKNOWN
};
- if (dev->rx_pkt_burst != enic_dummy_recv_pkts &&
+ if (dev->rx_pkt_burst != rte_eth_pkt_burst_dummy &&
dev->rx_pkt_burst != NULL) {
struct enic *enic = pmd_priv(dev);
if (enic->overlay_offload)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 97d97ea793f2..9f351de72eb4 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1664,7 +1664,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
/* replace Rx function with a no-op to avoid getting stale pkts */
- eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_eth_fp_ops[enic->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 74a90694c718..7a66d72275d9 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -31,17 +31,6 @@
#define rte_packet_prefetch(p) do {} while (0)
#endif
-/* dummy receive function to replace actual function in
- * order to do safe reconfiguration operations.
- */
-uint16_t
-enic_dummy_recv_pkts(__rte_unused void *rx_queue,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static inline uint16_t
enic_recv_pkts_common(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts, const bool use_64b_desc)
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3b72c2375a60..8dc6cfac704d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4383,14 +4383,6 @@ hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
return hns3_xmit_pkts;
}
-uint16_t
-hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- return 0;
-}
-
static void
hns3_trace_rxtx_function(struct rte_eth_dev *dev)
{
@@ -4432,14 +4424,14 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev)
eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev);
eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status;
eth_dev->tx_pkt_burst = hw->set_link_down ?
- hns3_dummy_rxtx_burst :
+ rte_eth_pkt_burst_dummy :
hns3_get_tx_function(eth_dev, &prep);
eth_dev->tx_pkt_prepare = prep;
eth_dev->tx_descriptor_status = hns3_dev_tx_descriptor_status;
hns3_trace_rxtx_function(eth_dev);
} else {
- eth_dev->rx_pkt_burst = hns3_dummy_rxtx_burst;
- eth_dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_prepare = NULL;
}
@@ -4632,7 +4624,7 @@ hns3_tx_done_cleanup(void *txq, uint32_t free_cnt)
if (dev->tx_pkt_burst == hns3_xmit_pkts)
return hns3_tx_done_cleanup_full(q, free_cnt);
- else if (dev->tx_pkt_burst == hns3_dummy_rxtx_burst)
+ else if (dev->tx_pkt_burst == rte_eth_pkt_burst_dummy)
return 0;
else
return -ENOTSUP;
@@ -4742,7 +4734,7 @@ hns3_enable_rxd_adv_layout(struct hns3_hw *hw)
void
hns3_stop_tx_datapath(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = hns3_dummy_rxtx_burst;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
dev->tx_pkt_prepare = NULL;
hns3_eth_dev_fp_ops_config(dev);
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 094b65b7de70..a000318357ab 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -729,9 +729,6 @@ void hns3_init_rx_ptype_tble(struct rte_eth_dev *dev);
void hns3_set_rxtx_function(struct rte_eth_dev *eth_dev);
eth_tx_burst_t hns3_get_tx_function(struct rte_eth_dev *dev,
eth_tx_prep_t *prep);
-uint16_t hns3_dummy_rxtx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused);
uint32_t hns3_get_tqp_intr_reg_offset(uint16_t tqp_intr_id);
void hns3_set_queue_intr_gl(struct hns3_hw *hw, uint16_t queue_id,
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 3f3c4a7c7214..910b76a92c42 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -350,8 +350,8 @@ mlx4_dev_stop(struct rte_eth_dev *dev)
return 0;
DEBUG("%p: detaching flows from all RX queues", (void *)dev);
priv->started = 0;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
@@ -383,8 +383,8 @@ mlx4_dev_close(struct rte_eth_dev *dev)
DEBUG("%p: closing device \"%s\"",
(void *)dev,
((priv->ctx != NULL) ? priv->ctx->device->name : ""));
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx4_mp_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx4/mlx4_mp.c b/drivers/net/mlx4/mlx4_mp.c
index 8fcfb5490ee9..1da64910aadd 100644
--- a/drivers/net/mlx4/mlx4_mp.c
+++ b/drivers/net/mlx4/mlx4_mp.c
@@ -150,8 +150,8 @@ mp_secondary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
break;
case MLX4_MP_REQ_STOP_RXTX:
INFO("port %u stopping datapath", dev->data->port_id);
- dev->tx_pkt_burst = mlx4_tx_burst_removed;
- dev->rx_pkt_burst = mlx4_rx_burst_removed;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(dev, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ed9e41fcdea9..059e432a63fc 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -1338,55 +1338,3 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq->stats.ipackets += i;
return i;
}
-
-/**
- * Dummy DPDK callback for Tx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to Tx queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_txq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
-
-/**
- * Dummy DPDK callback for Rx.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to Rx queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
- (void)dpdk_rxq;
- (void)pkts;
- (void)pkts_n;
- rte_mb();
- return 0;
-}
diff --git a/drivers/net/mlx4/mlx4_rxtx.h b/drivers/net/mlx4/mlx4_rxtx.h
index 83e9534cd0a7..70f3cd868058 100644
--- a/drivers/net/mlx4/mlx4_rxtx.h
+++ b/drivers/net/mlx4/mlx4_rxtx.h
@@ -149,10 +149,6 @@ uint16_t mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
uint16_t pkts_n);
uint16_t mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t mlx4_tx_burst_removed(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
-uint16_t mlx4_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
/* mlx4_txq.c */
diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
index c448a3e9eb87..e607089e0e20 100644
--- a/drivers/net/mlx5/linux/mlx5_mp_os.c
+++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
@@ -192,8 +192,8 @@ struct rte_mp_msg mp_res;
break;
case MLX5_MP_REQ_STOP_RXTX:
DRV_LOG(INFO, "port %u stopping datapath", dev->data->port_id);
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_mb();
mp_init_msg(&priv->mp_id, &mp_res, param->type);
res->result = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68abb..bbe05bb837e0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1623,8 +1623,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60a5..5571e9067787 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1559,8 +1559,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_action_handle_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index f388fcc31395..11ea935d72f0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -252,7 +252,7 @@ mlx5_rx_queue_count(void *rx_queue)
dev = &rte_eth_devices[rxq->port_id];
if (dev->rx_pkt_burst == NULL ||
- dev->rx_pkt_burst == removed_rx_burst) {
+ dev->rx_pkt_burst == rte_eth_pkt_burst_dummy) {
rte_errno = ENOTSUP;
return -rte_errno;
}
@@ -1153,31 +1153,6 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
return i;
}
-/**
- * Dummy DPDK callback for RX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_rxq
- * Generic pointer to RX queue structure.
- * @param[out] pkts
- * Array to store received packets.
- * @param pkts_n
- * Maximum number of packets in array.
- *
- * @return
- * Number of packets successfully received (<= pkts_n).
- */
-uint16_t
-removed_rx_burst(void *dpdk_rxq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/*
* Vectorized Rx routines are not compiled in when required vector instructions
* are not supported on a target architecture.
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index cb5d51340db7..7e417819f7e8 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -275,8 +275,6 @@ __rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
uint16_t pkts_n);
-uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
uint32_t mlx5_rx_queue_count(void *rx_queue);
void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 74c9c0a4fff8..3a59237b1a7a 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1244,8 +1244,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
dev->data->dev_started = 0;
/* Prevent crashes when queues are still in use. */
- dev->rx_pkt_burst = removed_rx_burst;
- dev->tx_pkt_burst = removed_tx_burst;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_os_req_stop_rxtx(dev);
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf2096753..8453b2701a9f 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -135,31 +135,6 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
return 0;
}
-/**
- * Dummy DPDK callback for TX.
- *
- * This function is used to temporarily replace the real callback during
- * unsafe control operations on the queue, or in case of error.
- *
- * @param dpdk_txq
- * Generic pointer to TX queue structure.
- * @param[in] pkts
- * Packets to transmit.
- * @param pkts_n
- * Number of packets in array.
- *
- * @return
- * Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-removed_tx_burst(void *dpdk_txq __rte_unused,
- struct rte_mbuf **pkts __rte_unused,
- uint16_t pkts_n __rte_unused)
-{
- rte_mb();
- return 0;
-}
-
/**
* Update completion queue consuming index via doorbell
* and flush the completed data buffers.
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 398cadfeaa46..c4b8271f6fb3 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -221,8 +221,6 @@ void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev);
/* mlx5_tx.c */
-uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
- uint16_t pkts_n);
void mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx __rte_unused);
int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index ac0af0ff7d43..7f3532426f1f 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -574,8 +574,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
DRV_LOG(DEBUG, "port %u MTU is %u.", eth_dev->data->port_id,
priv->mtu);
/* Initialize burst functions to prevent crashes before link-up. */
- eth_dev->rx_pkt_burst = removed_rx_burst;
- eth_dev->tx_pkt_burst = removed_tx_burst;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->dev_ops = &mlx5_dev_ops;
eth_dev->rx_descriptor_status = mlx5_rx_descriptor_status;
eth_dev->tx_descriptor_status = mlx5_tx_descriptor_status;
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index edf32aa70da6..c2991ab1ccaa 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -235,22 +235,6 @@ pfe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkts;
}
-static uint16_t
-pfe_dummy_xmit_pkts(__rte_unused void *tx_queue,
- __rte_unused struct rte_mbuf **tx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-static uint16_t
-pfe_dummy_recv_pkts(__rte_unused void *rxq,
- __rte_unused struct rte_mbuf **rx_pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
static int
pfe_eth_open(struct rte_eth_dev *dev)
{
@@ -383,8 +367,8 @@ pfe_eth_stop(struct rte_eth_dev *dev/*, int wake*/)
gemac_disable(priv->EMAC_baseaddr);
gpi_disable(priv->GPI_baseaddr);
- dev->rx_pkt_burst = &pfe_dummy_recv_pkts;
- dev->tx_pkt_burst = &pfe_dummy_xmit_pkts;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return 0;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1122a297e6b..ea6b71f09355 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -322,8 +322,8 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
bool use_tx_offload = false;
if (is_dummy) {
- dev->rx_pkt_burst = qede_rxtx_pkts_dummy;
- dev->tx_pkt_burst = qede_rxtx_pkts_dummy;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
return;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 7088c57b501d..85784f4a82a6 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2734,15 +2734,6 @@ qede_xmit_pkts_cmt(void *p_fp_cmt, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return eng0_pkts + eng1_pkts;
}
-uint16_t
-qede_rxtx_pkts_dummy(__rte_unused void *p_rxq,
- __rte_unused struct rte_mbuf **pkts,
- __rte_unused uint16_t nb_pkts)
-{
- return 0;
-}
-
-
/* this function does a fake walk through over completion queue
* to calculate number of BDs used by HW.
* At the end, it restores the state of completion queue.
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 11ed1d9b9c50..013a4a07c716 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -272,9 +272,6 @@ uint16_t qede_recv_pkts_cmt(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t
qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
-uint16_t qede_rxtx_pkts_dummy(void *p_rxq,
- struct rte_mbuf **pkts,
- uint16_t nb_pkts);
int qede_start_queues(struct rte_eth_dev *eth_dev);
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
new file mode 100644
index 000000000000..fb7323f4d327
--- /dev/null
+++ b/lib/ethdev/ethdev_driver.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include "ethdev_driver.h"
+
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused)
+{
+ return 0;
+}
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 617b450d5763..8de8e1c67113 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1509,6 +1509,23 @@ rte_eth_linkstatus_get(const struct rte_eth_dev *dev,
*dst = __atomic_load_n(src, __ATOMIC_SEQ_CST);
}
+/**
+ * @internal
+ * Dummy DPDK callback for Rx/Tx packet burst.
+ *
+ * @param queue
+ * Pointer to Rx/Tx queue
+ * @param pkts
+ * Packet array
+ * @param nb_pkts
+ * Number of packets in packet array
+ */
+__rte_internal
+uint16_t
+rte_eth_pkt_burst_dummy(void *queue __rte_unused,
+ struct rte_mbuf **pkts __rte_unused,
+ uint16_t nb_pkts __rte_unused);
+
/**
* Allocate an unique switch domain identifier.
*
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 0205c853df53..a094585bf715 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -2,6 +2,7 @@
# Copyright(c) 2017 Intel Corporation
sources = files(
+ 'ethdev_driver.c',
'ethdev_private.c',
'ethdev_profile.c',
'ethdev_trace_points.c',
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1a43282ce45d..d5cc56a56023 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -289,6 +289,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
rte_eth_ip_reassembly_dynfield_register;
+ rte_eth_pkt_burst_dummy;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.34.1
* [PATCH v5 2/2] ethdev: move driver interface functions to its own file
2022-02-11 19:11 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
@ 2022-02-11 19:11 ` Ferruh Yigit
2022-02-11 20:18 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
1 sibling, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 19:11 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko, Anatoly Burakov; +Cc: dev, Ferruh Yigit
ethdev has two interfaces: one between applications and the library,
whose APIs are declared in the rte_ethdev.h public header, and another
between drivers and the library, whose functions are declared in
ethdev_driver.h and marked as internal.
But all of these functions are defined in the rte_ethdev.c file. This
patch moves the driver-facing functions to their own file,
ethdev_driver.c, as a cleanup; there is no functional change in the
functions themselves.
Some public APIs and driver helpers call common internal functions,
which were mostly static since both lived in the same file. To be able
to move the driver helpers, the common functions are moved to
ethdev_private.c.
(ethdev_private.c is used for functions that are internal to the library
and shared by multiple .c files in the ethdev library.)
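As a rough sketch of the resulting split (an editorial illustration only;
the application and driver snippets below are hypothetical, while the header
names and driver helpers come from the patch, and in practice the two parts
would live in separate source files):
#include <errno.h>
#include <rte_ethdev.h>    /* application-facing interface */
#include <ethdev_driver.h> /* driver-facing interface, internal */
static uint16_t
app_poll(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
{
	/* Applications use only the public API declared in rte_ethdev.h. */
	return rte_eth_rx_burst(port_id, 0, pkts, n);
}
static int
drv_probe(struct rte_device *device)
{
	/* Drivers use the internal helpers now defined in ethdev_driver.c. */
	struct rte_eth_dev *dev = rte_eth_dev_allocate("net_example");
	if (dev == NULL)
		return -ENODEV;
	dev->device = device;
	rte_eth_dev_probing_finish(dev);
	return 0;
}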
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
lib/ethdev/ethdev_driver.c | 758 ++++++++++++++++++++++++++++++
lib/ethdev/ethdev_private.c | 131 ++++++
lib/ethdev/ethdev_private.h | 36 ++
lib/ethdev/rte_ethdev.c | 901 ------------------------------------
4 files changed, 925 insertions(+), 901 deletions(-)
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index fb7323f4d327..9334e2a67650 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -2,7 +2,633 @@
* Copyright(c) 2022 Intel Corporation
*/
+#include <rte_kvargs.h>
+#include <rte_malloc.h>
+
#include "ethdev_driver.h"
+#include "ethdev_private.h"
+
+/**
+ * A set of values to describe the possible states of a switch domain.
+ */
+enum rte_eth_switch_domain_state {
+ RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED
+};
+
+/**
+ * Array of switch domains available for allocation. Array is sized to
+ * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
+ * ethdev ports in a single process.
+ */
+static struct rte_eth_dev_switch {
+ enum rte_eth_switch_domain_state state;
+} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_dev *
+eth_dev_allocated(const char *name)
+{
+ uint16_t i;
+
+ RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (rte_eth_devices[i].data != NULL &&
+ strcmp(rte_eth_devices[i].data->name, name) == 0)
+ return &rte_eth_devices[i];
+ }
+ return NULL;
+}
+
+static uint16_t
+eth_dev_find_free_port(void)
+{
+ uint16_t i;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ /* Using shared name field to find a free port. */
+ if (eth_dev_shared_data->data[i].name[0] == '\0') {
+ RTE_ASSERT(rte_eth_devices[i].state ==
+ RTE_ETH_DEV_UNUSED);
+ return i;
+ }
+ }
+ return RTE_MAX_ETHPORTS;
+}
+
+static struct rte_eth_dev *
+eth_dev_get(uint16_t port_id)
+{
+ struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
+
+	eth_dev->data = &eth_dev_shared_data->data[port_id];
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocate(const char *name)
+{
+ uint16_t port_id;
+ struct rte_eth_dev *eth_dev = NULL;
+ size_t name_len;
+
+ name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
+ if (name_len == 0) {
+ RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
+ return NULL;
+ }
+
+ if (name_len >= RTE_ETH_NAME_MAX_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
+ return NULL;
+ }
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port creation between primary and secondary threads. */
+	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ if (eth_dev_allocated(name) != NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethernet device with name %s already allocated\n",
+ name);
+ goto unlock;
+ }
+
+ port_id = eth_dev_find_free_port();
+ if (port_id == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Reached maximum number of Ethernet ports\n");
+ goto unlock;
+ }
+
+ eth_dev = eth_dev_get(port_id);
+ strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
+ eth_dev->data->port_id = port_id;
+ eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
+ eth_dev->data->mtu = RTE_ETHER_MTU;
+	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
+
+unlock:
+	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return eth_dev;
+}
+
+struct rte_eth_dev *
+rte_eth_dev_allocated(const char *name)
+{
+ struct rte_eth_dev *ethdev;
+
+ eth_dev_shared_data_prepare();
+
+	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ ethdev = eth_dev_allocated(name);
+
+	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return ethdev;
+}
+
+/*
+ * Attach to a port already registered by the primary process, which
+ * makes sure that the same device would have the same port ID both
+ * in the primary and secondary process.
+ */
+struct rte_eth_dev *
+rte_eth_dev_attach_secondary(const char *name)
+{
+ uint16_t i;
+ struct rte_eth_dev *eth_dev = NULL;
+
+ eth_dev_shared_data_prepare();
+
+ /* Synchronize port attachment to primary port creation and release. */
+	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
+ break;
+ }
+ if (i == RTE_MAX_ETHPORTS) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device %s is not driven by the primary process\n",
+ name);
+ } else {
+ eth_dev = eth_dev_get(i);
+ RTE_ASSERT(eth_dev->data->port_id == i);
+ }
+
+	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+ return eth_dev;
+}
+
+int
+rte_eth_dev_callback_process(struct rte_eth_dev *dev,
+ enum rte_eth_event_type event, void *ret_param)
+{
+ struct rte_eth_dev_callback *cb_lst;
+ struct rte_eth_dev_callback dev_cb;
+ int rc = 0;
+
+	rte_spinlock_lock(&eth_dev_cb_lock);
+ TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
+ if (cb_lst->cb_fn == NULL || cb_lst->event != event)
+ continue;
+ dev_cb = *cb_lst;
+ cb_lst->active = 1;
+ if (ret_param != NULL)
+ dev_cb.ret_param = ret_param;
+
+		rte_spinlock_unlock(&eth_dev_cb_lock);
+ rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
+ dev_cb.cb_arg, dev_cb.ret_param);
+		rte_spinlock_lock(&eth_dev_cb_lock);
+ cb_lst->active = 0;
+ }
+	rte_spinlock_unlock(&eth_dev_cb_lock);
+ return rc;
+}
+
+void
+rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
+{
+ if (dev == NULL)
+ return;
+
+ /*
+ * for secondary process, at that point we expect device
+ * to be already 'usable', so shared data and all function pointers
+ * for fast-path devops have to be setup properly inside rte_eth_dev.
+ */
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+ eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
+
+ dev->state = RTE_ETH_DEV_ATTACHED;
+}
+
+int
+rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
+{
+ if (eth_dev == NULL)
+ return -EINVAL;
+
+ eth_dev_shared_data_prepare();
+
+ if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+ rte_eth_dev_callback_process(eth_dev,
+ RTE_ETH_EVENT_DESTROY, NULL);
+
+ eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
+	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
+
+ eth_dev->state = RTE_ETH_DEV_UNUSED;
+ eth_dev->device = NULL;
+ eth_dev->process_private = NULL;
+ eth_dev->intr_handle = NULL;
+ eth_dev->rx_pkt_burst = NULL;
+ eth_dev->tx_pkt_burst = NULL;
+ eth_dev->tx_pkt_prepare = NULL;
+ eth_dev->rx_queue_count = NULL;
+ eth_dev->rx_descriptor_status = NULL;
+ eth_dev->tx_descriptor_status = NULL;
+ eth_dev->dev_ops = NULL;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ rte_free(eth_dev->data->rx_queues);
+ rte_free(eth_dev->data->tx_queues);
+ rte_free(eth_dev->data->mac_addrs);
+ rte_free(eth_dev->data->hash_mac_addrs);
+ rte_free(eth_dev->data->dev_private);
+		pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
+ memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
+ }
+
+	rte_spinlock_unlock(&eth_dev_shared_data->ownership_lock);
+
+ return 0;
+}
+
+int
+rte_eth_dev_create(struct rte_device *device, const char *name,
+ size_t priv_data_size,
+ ethdev_bus_specific_init ethdev_bus_specific_init,
+ void *bus_init_params,
+ ethdev_init_t ethdev_init, void *init_params)
+{
+ struct rte_eth_dev *ethdev;
+ int retval;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ethdev = rte_eth_dev_allocate(name);
+ if (!ethdev)
+ return -ENODEV;
+
+ if (priv_data_size) {
+ ethdev->data->dev_private = rte_zmalloc_socket(
+ name, priv_data_size, RTE_CACHE_LINE_SIZE,
+ device->numa_node);
+
+ if (!ethdev->data->dev_private) {
+ RTE_ETHDEV_LOG(ERR,
+ "failed to allocate private data\n");
+ retval = -ENOMEM;
+ goto probe_failed;
+ }
+ }
+ } else {
+ ethdev = rte_eth_dev_attach_secondary(name);
+ if (!ethdev) {
+ RTE_ETHDEV_LOG(ERR,
+ "secondary process attach failed, ethdev doesn't exist\n");
+ return -ENODEV;
+ }
+ }
+
+ ethdev->device = device;
+
+ if (ethdev_bus_specific_init) {
+ retval = ethdev_bus_specific_init(ethdev, bus_init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR,
+ "ethdev bus specific initialisation failed\n");
+ goto probe_failed;
+ }
+ }
+
+ retval = ethdev_init(ethdev, init_params);
+ if (retval) {
+ RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
+ goto probe_failed;
+ }
+
+ rte_eth_dev_probing_finish(ethdev);
+
+ return retval;
+
+probe_failed:
+ rte_eth_dev_release_port(ethdev);
+ return retval;
+}
+
+int
+rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
+ ethdev_uninit_t ethdev_uninit)
+{
+ int ret;
+
+ ethdev = rte_eth_dev_allocated(ethdev->data->name);
+ if (!ethdev)
+ return -ENODEV;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
+
+ ret = ethdev_uninit(ethdev);
+ if (ret)
+ return ret;
+
+ return rte_eth_dev_release_port(ethdev);
+}
+
+struct rte_eth_dev *
+rte_eth_dev_get_by_name(const char *name)
+{
+ uint16_t pid;
+
+ if (rte_eth_dev_get_port_by_name(name, &pid))
+ return NULL;
+
+ return &rte_eth_devices[pid];
+}
+
+int
+rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+int
+rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
+ return 1;
+ return 0;
+}
+
+void
+rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
+{
+ if (dev->data->dev_started) {
+ RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
+ dev->data->port_id);
+ return;
+ }
+
+ eth_dev_rx_queue_config(dev, 0);
+ eth_dev_tx_queue_config(dev, 0);
+
+ memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
+}
+
+static int
+eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
+{
+ int state;
+ struct rte_kvargs_pair *pair;
+ char *letter;
+
+ arglist->str = strdup(str_in);
+ if (arglist->str == NULL)
+ return -ENOMEM;
+
+ letter = arglist->str;
+ state = 0;
+ arglist->count = 0;
+ pair = &arglist->pairs[0];
+ while (1) {
+ switch (state) {
+ case 0: /* Initial */
+ if (*letter == '=')
+ return -EINVAL;
+ else if (*letter == '\0')
+ return 0;
+
+ state = 1;
+ pair->key = letter;
+ /* fallthrough */
+
+ case 1: /* Parsing key */
+ if (*letter == '=') {
+ *letter = '\0';
+ pair->value = letter + 1;
+ state = 2;
+ } else if (*letter == ',' || *letter == '\0')
+ return -EINVAL;
+ break;
+
+
+ case 2: /* Parsing value */
+ if (*letter == '[')
+ state = 3;
+ else if (*letter == ',') {
+ *letter = '\0';
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ } else if (*letter == '\0') {
+ letter--;
+ arglist->count++;
+ pair = &arglist->pairs[arglist->count];
+ state = 0;
+ }
+ break;
+
+ case 3: /* Parsing list */
+ if (*letter == ']')
+ state = 2;
+ else if (*letter == '\0')
+ return -EINVAL;
+ break;
+ }
+ letter++;
+ }
+}
+
+int
+rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
+{
+ struct rte_kvargs args;
+ struct rte_kvargs_pair *pair;
+ unsigned int i;
+ int result = 0;
+
+ memset(eth_da, 0, sizeof(*eth_da));
+
+ result = eth_dev_devargs_tokenise(&args, dargs);
+ if (result < 0)
+ goto parse_cleanup;
+
+ for (i = 0; i < args.count; i++) {
+ pair = &args.pairs[i];
+ if (strcmp("representor", pair->key) == 0) {
+ if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
+ RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
+ dargs);
+ result = -1;
+ goto parse_cleanup;
+ }
+ result = rte_eth_devargs_parse_representor_ports(
+ pair->value, eth_da);
+ if (result < 0)
+ goto parse_cleanup;
+ }
+ }
+
+parse_cleanup:
+ if (args.str)
+ free(args.str);
+
+ return result;
+}
+
+static inline int
+eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
+ const char *ring_name)
+{
+ return snprintf(name, len, "eth_p%d_q%d_%s",
+ port_id, queue_id, ring_name);
+}
+
+int
+rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc = 0;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ return -ENAMETOOLONG;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz)
+ rc = rte_memzone_free(mz);
+ else
+ rc = -ENOENT;
+
+ return rc;
+}
+
+const struct rte_memzone *
+rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id, size_t size, unsigned int align,
+ int socket_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+ int rc;
+
+ rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
+ queue_id, ring_name);
+ if (rc >= RTE_MEMZONE_NAMESIZE) {
+ RTE_ETHDEV_LOG(ERR, "ring name too long\n");
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz) {
+ if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
+ size > mz->len ||
+ ((uintptr_t)mz->addr & (align - 1)) != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "memzone %s does not justify the requested attributes\n",
+ mz->name);
+ return NULL;
+ }
+
+ return mz;
+ }
+
+ return rte_memzone_reserve_aligned(z_name, size, socket_id,
+ RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+int
+rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
+ peer_info, direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[cur_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
+ direction);
+}
+
+int
+rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
+ struct rte_hairpin_peer_info *cur_info,
+ struct rte_hairpin_peer_info *peer_info,
+ uint32_t direction)
+{
+ struct rte_eth_dev *dev;
+
+ /* Current queue information is not mandatory. */
+ if (peer_info == NULL)
+ return -EINVAL;
+
+ /* No need to check the validity again. */
+ dev = &rte_eth_devices[peer_port];
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
+ -ENOTSUP);
+
+ return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
+ cur_info, peer_info, direction);
+}
+
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
+ .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
uint16_t
rte_eth_pkt_burst_dummy(void *queue __rte_unused,
@@ -11,3 +637,135 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
{
return 0;
}
+
+int
+rte_eth_representor_id_get(uint16_t port_id,
+ enum rte_eth_representor_type type,
+ int controller, int pf, int representor_port,
+ uint16_t *repr_id)
+{
+ int ret, n, count;
+ uint32_t i;
+ struct rte_eth_representor_info *info = NULL;
+ size_t size;
+
+ if (type == RTE_ETH_REPRESENTOR_NONE)
+ return 0;
+ if (repr_id == NULL)
+ return -EINVAL;
+
+ /* Get PMD representor range info. */
+ ret = rte_eth_representor_info_get(port_id, NULL);
+ if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
+ controller == -1 && pf == -1) {
+ /* Direct mapping for legacy VF representor. */
+ *repr_id = representor_port;
+ return 0;
+ } else if (ret < 0) {
+ return ret;
+ }
+ n = ret;
+ size = sizeof(*info) + n * sizeof(info->ranges[0]);
+ info = calloc(1, size);
+ if (info == NULL)
+ return -ENOMEM;
+ info->nb_ranges_alloc = n;
+ ret = rte_eth_representor_info_get(port_id, info);
+ if (ret < 0)
+ goto out;
+
+ /* Default controller and pf to caller. */
+ if (controller == -1)
+ controller = info->controller;
+ if (pf == -1)
+ pf = info->pf;
+
+ /* Locate representor ID. */
+ ret = -ENOENT;
+ for (i = 0; i < info->nb_ranges; ++i) {
+ if (info->ranges[i].type != type)
+ continue;
+ if (info->ranges[i].controller != controller)
+ continue;
+ if (info->ranges[i].id_end < info->ranges[i].id_base) {
+ RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
+ port_id, info->ranges[i].id_base,
+ info->ranges[i].id_end, i);
+ continue;
+
+ }
+ count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
+ switch (info->ranges[i].type) {
+ case RTE_ETH_REPRESENTOR_PF:
+ if (pf < info->ranges[i].pf ||
+ pf >= info->ranges[i].pf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (pf - info->ranges[i].pf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_VF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].vf ||
+ representor_port >= info->ranges[i].vf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].vf);
+ ret = 0;
+ goto out;
+ case RTE_ETH_REPRESENTOR_SF:
+ if (info->ranges[i].pf != pf)
+ continue;
+ if (representor_port < info->ranges[i].sf ||
+ representor_port >= info->ranges[i].sf + count)
+ continue;
+ *repr_id = info->ranges[i].id_base +
+ (representor_port - info->ranges[i].sf);
+ ret = 0;
+ goto out;
+ default:
+ break;
+ }
+ }
+out:
+ free(info);
+ return ret;
+}
+
+int
+rte_eth_switch_domain_alloc(uint16_t *domain_id)
+{
+ uint16_t i;
+
+ *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ if (eth_dev_switch_domains[i].state ==
+ RTE_ETH_SWITCH_DOMAIN_UNUSED) {
+ eth_dev_switch_domains[i].state =
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
+ *domain_id = i;
+ return 0;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+int
+rte_eth_switch_domain_free(uint16_t domain_id)
+{
+ if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
+ domain_id >= RTE_MAX_ETHPORTS)
+ return -EINVAL;
+
+ if (eth_dev_switch_domains[domain_id].state !=
+ RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
+ return -EINVAL;
+
+ eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
+
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 8fca20c7d45b..84dc0b320ed0 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -3,10 +3,22 @@
*/
#include <rte_debug.h>
+
#include "rte_ethdev.h"
#include "ethdev_driver.h"
#include "ethdev_private.h"
+static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
+
+/* Shared memory between primary and secondary processes. */
+struct eth_dev_shared *eth_dev_shared_data;
+
+/* spinlock for shared data allocation */
+static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
+
+/* spinlock for eth device callbacks */
+rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
uint16_t
eth_dev_to_id(const struct rte_eth_dev *dev)
{
@@ -302,3 +314,122 @@ rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
return nb_pkts;
}
+
+void
+eth_dev_shared_data_prepare(void)
+{
+ const unsigned int flags = 0;
+ const struct rte_memzone *mz;
+
+ rte_spinlock_lock(ð_dev_shared_data_lock);
+
+ if (eth_dev_shared_data == NULL) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* Allocate port data and ownership shared memory. */
+ mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
+ sizeof(*eth_dev_shared_data),
+ rte_socket_id(), flags);
+ } else
+ mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
+ if (mz == NULL)
+ rte_panic("Cannot allocate ethdev shared data\n");
+
+ eth_dev_shared_data = mz->addr;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ eth_dev_shared_data->next_owner_id =
+ RTE_ETH_DEV_NO_OWNER + 1;
+ rte_spinlock_init(ð_dev_shared_data->ownership_lock);
+ memset(eth_dev_shared_data->data, 0,
+ sizeof(eth_dev_shared_data->data));
+ }
+ }
+
+ rte_spinlock_unlock(ð_dev_shared_data_lock);
+}
+
+void
+eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **rxq = dev->data->rx_queues;
+
+ if (rxq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->rx_queue_release != NULL)
+ (*dev->dev_ops->rx_queue_release)(dev, qid);
+ rxq[qid] = NULL;
+}
+
+void
+eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ void **txq = dev->data->tx_queues;
+
+ if (txq[qid] == NULL)
+ return;
+
+ if (dev->dev_ops->tx_queue_release != NULL)
+ (*dev->dev_ops->tx_queue_release)(dev, qid);
+ txq[qid] = NULL;
+}
+
+int
+eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_rx_queues;
+ unsigned int i;
+
+ if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
+ sizeof(dev->data->rx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->rx_queues == NULL) {
+ dev->data->nb_rx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_rxq_release(dev, i);
+
+ rte_free(dev->data->rx_queues);
+ dev->data->rx_queues = NULL;
+ }
+ dev->data->nb_rx_queues = nb_queues;
+ return 0;
+}
+
+int
+eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+ uint16_t old_nb_queues = dev->data->nb_tx_queues;
+ unsigned int i;
+
+ if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
+ sizeof(dev->data->tx_queues[0]) *
+ RTE_MAX_QUEUES_PER_PORT,
+ RTE_CACHE_LINE_SIZE);
+ if (dev->data->tx_queues == NULL) {
+ dev->data->nb_tx_queues = 0;
+ return -(ENOMEM);
+ }
+ } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
+ for (i = nb_queues; i < old_nb_queues; i++)
+ eth_dev_txq_release(dev, i);
+
+ rte_free(dev->data->tx_queues);
+ dev->data->tx_queues = NULL;
+ }
+ dev->data->nb_tx_queues = nb_queues;
+ return 0;
+}
+
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index cc91025e8d9b..cc9879907ce5 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -5,10 +5,38 @@
#ifndef _ETH_PRIVATE_H_
#define _ETH_PRIVATE_H_
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
#include <rte_os_shim.h>
#include "rte_ethdev.h"
+struct eth_dev_shared {
+ uint64_t next_owner_id;
+ rte_spinlock_t ownership_lock;
+ struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
+};
+
+extern struct eth_dev_shared *eth_dev_shared_data;
+
+/**
+ * The user application callback description.
+ *
+ * It contains callback address to be registered by user application,
+ * the pointer to the parameters for callback, and the event type.
+ */
+struct rte_eth_dev_callback {
+ TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
+ rte_eth_dev_cb_fn cb_fn; /**< Callback address */
+ void *cb_arg; /**< Parameter for callback */
+ void *ret_param; /**< Return parameter */
+ enum rte_eth_event_type event; /**< Interrupt event type */
+ uint32_t active; /**< Callback is executing */
+};
+
+extern rte_spinlock_t eth_dev_cb_lock;
+
/*
* Convert rte_eth_dev pointer to port ID.
* NULL will be translated to RTE_MAX_ETHPORTS.
@@ -33,4 +61,12 @@ void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
const struct rte_eth_dev *dev);
+
+void eth_dev_shared_data_prepare(void);
+
+void eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid);
+void eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid);
+int eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+int eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues);
+
#endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 2a479bea2128..70c850a2f18a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -30,7 +30,6 @@
#include <rte_errno.h>
#include <rte_spinlock.h>
#include <rte_string_fns.h>
-#include <rte_kvargs.h>
#include <rte_class.h>
#include <rte_ether.h>
#include <rte_telemetry.h>
@@ -41,37 +40,23 @@
#include "ethdev_profile.h"
#include "ethdev_private.h"
-static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
/* public fast-path API */
struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
-/* spinlock for eth device callbacks */
-static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
-
/* spinlock for add/remove Rx callbacks */
static rte_spinlock_t eth_dev_rx_cb_lock = RTE_SPINLOCK_INITIALIZER;
/* spinlock for add/remove Tx callbacks */
static rte_spinlock_t eth_dev_tx_cb_lock = RTE_SPINLOCK_INITIALIZER;
-/* spinlock for shared data allocation */
-static rte_spinlock_t eth_dev_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
-
/* store statistics names and its offset in stats structure */
struct rte_eth_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
unsigned offset;
};
-/* Shared memory between primary and secondary processes. */
-static struct {
- uint64_t next_owner_id;
- rte_spinlock_t ownership_lock;
- struct rte_eth_dev_data data[RTE_MAX_ETHPORTS];
-} *eth_dev_shared_data;
-
static const struct rte_eth_xstats_name_off eth_dev_stats_strings[] = {
{"rx_good_packets", offsetof(struct rte_eth_stats, ipackets)},
{"tx_good_packets", offsetof(struct rte_eth_stats, opackets)},
@@ -175,21 +160,6 @@ static const struct {
{RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, "FLOW_SHARED_OBJECT_KEEP"},
};
-/**
- * The user application callback description.
- *
- * It contains callback address to be registered by user application,
- * the pointer to the parameters for callback, and the event type.
- */
-struct rte_eth_dev_callback {
- TAILQ_ENTRY(rte_eth_dev_callback) next; /**< Callbacks list */
- rte_eth_dev_cb_fn cb_fn; /**< Callback address */
- void *cb_arg; /**< Parameter for callback */
- void *ret_param; /**< Return parameter */
- enum rte_eth_event_type event; /**< Interrupt event type */
- uint32_t active; /**< Callback is executing */
-};
-
enum {
STAT_QMAP_TX = 0,
STAT_QMAP_RX
@@ -399,227 +369,12 @@ rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id)
rte_eth_devices[ref_port_id].device);
}
-static void
-eth_dev_shared_data_prepare(void)
-{
- const unsigned flags = 0;
- const struct rte_memzone *mz;
-
- rte_spinlock_lock(ð_dev_shared_data_lock);
-
- if (eth_dev_shared_data == NULL) {
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Allocate port data and ownership shared memory. */
- mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
- sizeof(*eth_dev_shared_data),
- rte_socket_id(), flags);
- } else
- mz = rte_memzone_lookup(MZ_RTE_ETH_DEV_DATA);
- if (mz == NULL)
- rte_panic("Cannot allocate ethdev shared data\n");
-
- eth_dev_shared_data = mz->addr;
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- eth_dev_shared_data->next_owner_id =
- RTE_ETH_DEV_NO_OWNER + 1;
- rte_spinlock_init(ð_dev_shared_data->ownership_lock);
- memset(eth_dev_shared_data->data, 0,
- sizeof(eth_dev_shared_data->data));
- }
- }
-
- rte_spinlock_unlock(ð_dev_shared_data_lock);
-}
-
static bool
eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
{
return ethdev->data->name[0] != '\0';
}
-static struct rte_eth_dev *
-eth_dev_allocated(const char *name)
-{
- uint16_t i;
-
- RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].data != NULL &&
- strcmp(rte_eth_devices[i].data->name, name) == 0)
- return &rte_eth_devices[i];
- }
- return NULL;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocated(const char *name)
-{
- struct rte_eth_dev *ethdev;
-
- eth_dev_shared_data_prepare();
-
- rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
-
- ethdev = eth_dev_allocated(name);
-
- rte_spinlock_unlock(ð_dev_shared_data->ownership_lock);
-
- return ethdev;
-}
-
-static uint16_t
-eth_dev_find_free_port(void)
-{
- uint16_t i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- /* Using shared name field to find a free port. */
- if (eth_dev_shared_data->data[i].name[0] == '\0') {
- RTE_ASSERT(rte_eth_devices[i].state ==
- RTE_ETH_DEV_UNUSED);
- return i;
- }
- }
- return RTE_MAX_ETHPORTS;
-}
-
-static struct rte_eth_dev *
-eth_dev_get(uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
-
- eth_dev->data = ð_dev_shared_data->data[port_id];
-
- return eth_dev;
-}
-
-struct rte_eth_dev *
-rte_eth_dev_allocate(const char *name)
-{
- uint16_t port_id;
- struct rte_eth_dev *eth_dev = NULL;
- size_t name_len;
-
- name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN);
- if (name_len == 0) {
- RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n");
- return NULL;
- }
-
- if (name_len >= RTE_ETH_NAME_MAX_LEN) {
- RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n");
- return NULL;
- }
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port creation between primary and secondary threads. */
- rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
-
- if (eth_dev_allocated(name) != NULL) {
- RTE_ETHDEV_LOG(ERR,
- "Ethernet device with name %s already allocated\n",
- name);
- goto unlock;
- }
-
- port_id = eth_dev_find_free_port();
- if (port_id == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Reached maximum number of Ethernet ports\n");
- goto unlock;
- }
-
- eth_dev = eth_dev_get(port_id);
- strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
- eth_dev->data->port_id = port_id;
- eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
- eth_dev->data->mtu = RTE_ETHER_MTU;
- pthread_mutex_init(ð_dev->data->flow_ops_mutex, NULL);
-
-unlock:
- rte_spinlock_unlock(ð_dev_shared_data->ownership_lock);
-
- return eth_dev;
-}
-
-/*
- * Attach to a port already registered by the primary process, which
- * makes sure that the same device would have the same port ID both
- * in the primary and secondary process.
- */
-struct rte_eth_dev *
-rte_eth_dev_attach_secondary(const char *name)
-{
- uint16_t i;
- struct rte_eth_dev *eth_dev = NULL;
-
- eth_dev_shared_data_prepare();
-
- /* Synchronize port attachment to primary port creation and release. */
- rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (strcmp(eth_dev_shared_data->data[i].name, name) == 0)
- break;
- }
- if (i == RTE_MAX_ETHPORTS) {
- RTE_ETHDEV_LOG(ERR,
- "Device %s is not driven by the primary process\n",
- name);
- } else {
- eth_dev = eth_dev_get(i);
- RTE_ASSERT(eth_dev->data->port_id == i);
- }
-
- rte_spinlock_unlock(ð_dev_shared_data->ownership_lock);
- return eth_dev;
-}
-
-int
-rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
-{
- if (eth_dev == NULL)
- return -EINVAL;
-
- eth_dev_shared_data_prepare();
-
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
- rte_eth_dev_callback_process(eth_dev,
- RTE_ETH_EVENT_DESTROY, NULL);
-
- eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
-
- rte_spinlock_lock(ð_dev_shared_data->ownership_lock);
-
- eth_dev->state = RTE_ETH_DEV_UNUSED;
- eth_dev->device = NULL;
- eth_dev->process_private = NULL;
- eth_dev->intr_handle = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- eth_dev->tx_pkt_prepare = NULL;
- eth_dev->rx_queue_count = NULL;
- eth_dev->rx_descriptor_status = NULL;
- eth_dev->tx_descriptor_status = NULL;
- eth_dev->dev_ops = NULL;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- rte_free(eth_dev->data->rx_queues);
- rte_free(eth_dev->data->tx_queues);
- rte_free(eth_dev->data->mac_addrs);
- rte_free(eth_dev->data->hash_mac_addrs);
- rte_free(eth_dev->data->dev_private);
- pthread_mutex_destroy(ð_dev->data->flow_ops_mutex);
- memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
- }
-
- rte_spinlock_unlock(ð_dev_shared_data->ownership_lock);
-
- return 0;
-}
-
int
rte_eth_dev_is_valid_port(uint16_t port_id)
{
@@ -894,17 +649,6 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
return -ENODEV;
}
-struct rte_eth_dev *
-rte_eth_dev_get_by_name(const char *name)
-{
- uint16_t pid;
-
- if (rte_eth_dev_get_port_by_name(name, &pid))
- return NULL;
-
- return &rte_eth_devices[pid];
-}
-
static int
eth_err(uint16_t port_id, int ret)
{
@@ -915,62 +659,6 @@ eth_err(uint16_t port_id, int ret)
return ret;
}
-static void
-eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **rxq = dev->data->rx_queues;
-
- if (rxq[qid] == NULL)
- return;
-
- if (dev->dev_ops->rx_queue_release != NULL)
- (*dev->dev_ops->rx_queue_release)(dev, qid);
- rxq[qid] = NULL;
-}
-
-static void
-eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- void **txq = dev->data->tx_queues;
-
- if (txq[qid] == NULL)
- return;
-
- if (dev->dev_ops->tx_queue_release != NULL)
- (*dev->dev_ops->tx_queue_release)(dev, qid);
- txq[qid] = NULL;
-}
-
-static int
-eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_rx_queues;
- unsigned i;
-
- if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
- sizeof(dev->data->rx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->rx_queues == NULL) {
- dev->data->nb_rx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- } else if (dev->data->rx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_rxq_release(dev, i);
-
- rte_free(dev->data->rx_queues);
- dev->data->rx_queues = NULL;
- }
- dev->data->nb_rx_queues = nb_queues;
- return 0;
-}
-
static int
eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
@@ -1161,36 +849,6 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
return eth_err(port_id, dev->dev_ops->tx_queue_stop(dev, tx_queue_id));
}
-static int
-eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
-{
- uint16_t old_nb_queues = dev->data->nb_tx_queues;
- unsigned i;
-
- if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
- dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
- sizeof(dev->data->tx_queues[0]) *
- RTE_MAX_QUEUES_PER_PORT,
- RTE_CACHE_LINE_SIZE);
- if (dev->data->tx_queues == NULL) {
- dev->data->nb_tx_queues = 0;
- return -(ENOMEM);
- }
- } else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- } else if (dev->data->tx_queues != NULL && nb_queues == 0) {
- for (i = nb_queues; i < old_nb_queues; i++)
- eth_dev_txq_release(dev, i);
-
- rte_free(dev->data->tx_queues);
- dev->data->tx_queues = NULL;
- }
- dev->data->nb_tx_queues = nb_queues;
- return 0;
-}
-
uint32_t
rte_eth_speed_bitflag(uint32_t speed, int duplex)
{
@@ -1682,21 +1340,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
return ret;
}
-void
-rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
-{
- if (dev->data->dev_started) {
- RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n",
- dev->data->port_id);
- return;
- }
-
- eth_dev_rx_queue_config(dev, 0);
- eth_dev_tx_queue_config(dev, 0);
-
- memset(&dev->data->dev_conf, 0, sizeof(dev->data->dev_conf));
-}
-
static void
eth_dev_mac_restore(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info)
@@ -4914,52 +4557,6 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
return ret;
}
-int
-rte_eth_dev_callback_process(struct rte_eth_dev *dev,
- enum rte_eth_event_type event, void *ret_param)
-{
- struct rte_eth_dev_callback *cb_lst;
- struct rte_eth_dev_callback dev_cb;
- int rc = 0;
-
- rte_spinlock_lock(ð_dev_cb_lock);
- TAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {
- if (cb_lst->cb_fn == NULL || cb_lst->event != event)
- continue;
- dev_cb = *cb_lst;
- cb_lst->active = 1;
- if (ret_param != NULL)
- dev_cb.ret_param = ret_param;
-
- rte_spinlock_unlock(ð_dev_cb_lock);
- rc = dev_cb.cb_fn(dev->data->port_id, dev_cb.event,
- dev_cb.cb_arg, dev_cb.ret_param);
- rte_spinlock_lock(ð_dev_cb_lock);
- cb_lst->active = 0;
- }
- rte_spinlock_unlock(ð_dev_cb_lock);
- return rc;
-}
-
-void
-rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
-{
- if (dev == NULL)
- return;
-
- /*
- * for secondary process, at that point we expect device
- * to be already 'usable', so shared data and all function pointers
- * for fast-path devops have to be setup properly inside rte_eth_dev.
- */
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
-
- rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
-
- dev->state = RTE_ETH_DEV_ATTACHED;
-}
-
int
rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
{
@@ -5032,156 +4629,6 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
return fd;
}
-static inline int
-eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_id,
- const char *ring_name)
-{
- return snprintf(name, len, "eth_p%d_q%d_%s",
- port_id, queue_id, ring_name);
-}
-
-const struct rte_memzone *
-rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id, size_t size, unsigned align,
- int socket_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- rte_errno = ENAMETOOLONG;
- return NULL;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz) {
- if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) ||
- size > mz->len ||
- ((uintptr_t)mz->addr & (align - 1)) != 0) {
- RTE_ETHDEV_LOG(ERR,
- "memzone %s does not justify the requested attributes\n",
- mz->name);
- return NULL;
- }
-
- return mz;
- }
-
- return rte_memzone_reserve_aligned(z_name, size, socket_id,
- RTE_MEMZONE_IOVA_CONTIG, align);
-}
-
-int
-rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
- uint16_t queue_id)
-{
- char z_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int rc = 0;
-
- rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id,
- queue_id, ring_name);
- if (rc >= RTE_MEMZONE_NAMESIZE) {
- RTE_ETHDEV_LOG(ERR, "ring name too long\n");
- return -ENAMETOOLONG;
- }
-
- mz = rte_memzone_lookup(z_name);
- if (mz)
- rc = rte_memzone_free(mz);
- else
- rc = -ENOENT;
-
- return rc;
-}
-
-int
-rte_eth_dev_create(struct rte_device *device, const char *name,
- size_t priv_data_size,
- ethdev_bus_specific_init ethdev_bus_specific_init,
- void *bus_init_params,
- ethdev_init_t ethdev_init, void *init_params)
-{
- struct rte_eth_dev *ethdev;
- int retval;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_init, -EINVAL);
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- ethdev = rte_eth_dev_allocate(name);
- if (!ethdev)
- return -ENODEV;
-
- if (priv_data_size) {
- ethdev->data->dev_private = rte_zmalloc_socket(
- name, priv_data_size, RTE_CACHE_LINE_SIZE,
- device->numa_node);
-
- if (!ethdev->data->dev_private) {
- RTE_ETHDEV_LOG(ERR,
- "failed to allocate private data\n");
- retval = -ENOMEM;
- goto probe_failed;
- }
- }
- } else {
- ethdev = rte_eth_dev_attach_secondary(name);
- if (!ethdev) {
- RTE_ETHDEV_LOG(ERR,
- "secondary process attach failed, ethdev doesn't exist\n");
- return -ENODEV;
- }
- }
-
- ethdev->device = device;
-
- if (ethdev_bus_specific_init) {
- retval = ethdev_bus_specific_init(ethdev, bus_init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR,
- "ethdev bus specific initialisation failed\n");
- goto probe_failed;
- }
- }
-
- retval = ethdev_init(ethdev, init_params);
- if (retval) {
- RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n");
- goto probe_failed;
- }
-
- rte_eth_dev_probing_finish(ethdev);
-
- return retval;
-
-probe_failed:
- rte_eth_dev_release_port(ethdev);
- return retval;
-}
-
-int
-rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
- ethdev_uninit_t ethdev_uninit)
-{
- int ret;
-
- ethdev = rte_eth_dev_allocated(ethdev->data->name);
- if (!ethdev)
- return -ENODEV;
-
- RTE_FUNC_PTR_OR_ERR_RET(*ethdev_uninit, -EINVAL);
-
- ret = ethdev_uninit(ethdev);
- if (ret)
- return ret;
-
- return rte_eth_dev_release_port(ethdev);
-}
-
int
rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
int epfd, int op, void *data)
@@ -6005,22 +5452,6 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
return eth_err(port_id, (*dev->dev_ops->hairpin_cap_get)(dev, cap));
}
-int
-rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->rx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
-int
-rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
-{
- if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_HAIRPIN)
- return 1;
- return 0;
-}
-
int
rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
{
@@ -6042,255 +5473,6 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
return (*dev->dev_ops->pool_ops_supported)(dev, pool);
}
-/**
- * A set of values to describe the possible states of a switch domain.
- */
-enum rte_eth_switch_domain_state {
- RTE_ETH_SWITCH_DOMAIN_UNUSED = 0,
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED
-};
-
-/**
- * Array of switch domains available for allocation. Array is sized to
- * RTE_MAX_ETHPORTS elements as there cannot be more active switch domains than
- * ethdev ports in a single process.
- */
-static struct rte_eth_dev_switch {
- enum rte_eth_switch_domain_state state;
-} eth_dev_switch_domains[RTE_MAX_ETHPORTS];
-
-int
-rte_eth_switch_domain_alloc(uint16_t *domain_id)
-{
- uint16_t i;
-
- *domain_id = RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (eth_dev_switch_domains[i].state ==
- RTE_ETH_SWITCH_DOMAIN_UNUSED) {
- eth_dev_switch_domains[i].state =
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED;
- *domain_id = i;
- return 0;
- }
- }
-
- return -ENOSPC;
-}
-
-int
-rte_eth_switch_domain_free(uint16_t domain_id)
-{
- if (domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID ||
- domain_id >= RTE_MAX_ETHPORTS)
- return -EINVAL;
-
- if (eth_dev_switch_domains[domain_id].state !=
- RTE_ETH_SWITCH_DOMAIN_ALLOCATED)
- return -EINVAL;
-
- eth_dev_switch_domains[domain_id].state = RTE_ETH_SWITCH_DOMAIN_UNUSED;
-
- return 0;
-}
-
-static int
-eth_dev_devargs_tokenise(struct rte_kvargs *arglist, const char *str_in)
-{
- int state;
- struct rte_kvargs_pair *pair;
- char *letter;
-
- arglist->str = strdup(str_in);
- if (arglist->str == NULL)
- return -ENOMEM;
-
- letter = arglist->str;
- state = 0;
- arglist->count = 0;
- pair = &arglist->pairs[0];
- while (1) {
- switch (state) {
- case 0: /* Initial */
- if (*letter == '=')
- return -EINVAL;
- else if (*letter == '\0')
- return 0;
-
- state = 1;
- pair->key = letter;
- /* fall-thru */
-
- case 1: /* Parsing key */
- if (*letter == '=') {
- *letter = '\0';
- pair->value = letter + 1;
- state = 2;
- } else if (*letter == ',' || *letter == '\0')
- return -EINVAL;
- break;
-
-
- case 2: /* Parsing value */
- if (*letter == '[')
- state = 3;
- else if (*letter == ',') {
- *letter = '\0';
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- } else if (*letter == '\0') {
- letter--;
- arglist->count++;
- pair = &arglist->pairs[arglist->count];
- state = 0;
- }
- break;
-
- case 3: /* Parsing list */
- if (*letter == ']')
- state = 2;
- else if (*letter == '\0')
- return -EINVAL;
- break;
- }
- letter++;
- }
-}
-
-int
-rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
-{
- struct rte_kvargs args;
- struct rte_kvargs_pair *pair;
- unsigned int i;
- int result = 0;
-
- memset(eth_da, 0, sizeof(*eth_da));
-
- result = eth_dev_devargs_tokenise(&args, dargs);
- if (result < 0)
- goto parse_cleanup;
-
- for (i = 0; i < args.count; i++) {
- pair = &args.pairs[i];
- if (strcmp("representor", pair->key) == 0) {
- if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) {
- RTE_LOG(ERR, EAL, "duplicated representor key: %s\n",
- dargs);
- result = -1;
- goto parse_cleanup;
- }
- result = rte_eth_devargs_parse_representor_ports(
- pair->value, eth_da);
- if (result < 0)
- goto parse_cleanup;
- }
- }
-
-parse_cleanup:
- if (args.str)
- free(args.str);
-
- return result;
-}
-
-int
-rte_eth_representor_id_get(uint16_t port_id,
- enum rte_eth_representor_type type,
- int controller, int pf, int representor_port,
- uint16_t *repr_id)
-{
- int ret, n, count;
- uint32_t i;
- struct rte_eth_representor_info *info = NULL;
- size_t size;
-
- if (type == RTE_ETH_REPRESENTOR_NONE)
- return 0;
- if (repr_id == NULL)
- return -EINVAL;
-
- /* Get PMD representor range info. */
- ret = rte_eth_representor_info_get(port_id, NULL);
- if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
- controller == -1 && pf == -1) {
- /* Direct mapping for legacy VF representor. */
- *repr_id = representor_port;
- return 0;
- } else if (ret < 0) {
- return ret;
- }
- n = ret;
- size = sizeof(*info) + n * sizeof(info->ranges[0]);
- info = calloc(1, size);
- if (info == NULL)
- return -ENOMEM;
- info->nb_ranges_alloc = n;
- ret = rte_eth_representor_info_get(port_id, info);
- if (ret < 0)
- goto out;
-
- /* Default controller and pf to caller. */
- if (controller == -1)
- controller = info->controller;
- if (pf == -1)
- pf = info->pf;
-
- /* Locate representor ID. */
- ret = -ENOENT;
- for (i = 0; i < info->nb_ranges; ++i) {
- if (info->ranges[i].type != type)
- continue;
- if (info->ranges[i].controller != controller)
- continue;
- if (info->ranges[i].id_end < info->ranges[i].id_base) {
- RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
- port_id, info->ranges[i].id_base,
- info->ranges[i].id_end, i);
- continue;
-
- }
- count = info->ranges[i].id_end - info->ranges[i].id_base + 1;
- switch (info->ranges[i].type) {
- case RTE_ETH_REPRESENTOR_PF:
- if (pf < info->ranges[i].pf ||
- pf >= info->ranges[i].pf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (pf - info->ranges[i].pf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_VF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].vf ||
- representor_port >= info->ranges[i].vf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].vf);
- ret = 0;
- goto out;
- case RTE_ETH_REPRESENTOR_SF:
- if (info->ranges[i].pf != pf)
- continue;
- if (representor_port < info->ranges[i].sf ||
- representor_port >= info->ranges[i].sf + count)
- continue;
- *repr_id = info->ranges[i].id_base +
- (representor_port - info->ranges[i].sf);
- ret = 0;
- goto out;
- default:
- break;
- }
- }
-out:
- free(info);
- return ret;
-}
-
static int
eth_dev_handle_port_list(const char *cmd __rte_unused,
const char *params __rte_unused,
@@ -6533,61 +5715,6 @@ eth_dev_handle_port_info(const char *cmd __rte_unused,
return 0;
}
-int
-rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
- struct rte_hairpin_peer_info *cur_info,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* Current queue information is not mandatory. */
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[peer_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_update,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_update)(dev, peer_queue,
- cur_info, peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
- struct rte_hairpin_peer_info *peer_info,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- if (peer_info == NULL)
- return -EINVAL;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_bind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_bind)(dev, cur_queue,
- peer_info, direction);
-}
-
-int
-rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
- uint32_t direction)
-{
- struct rte_eth_dev *dev;
-
- /* No need to check the validity again. */
- dev = &rte_eth_devices[cur_port];
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hairpin_queue_peer_unbind,
- -ENOTSUP);
-
- return (*dev->dev_ops->hairpin_queue_peer_unbind)(dev, cur_queue,
- direction);
-}
-
int
rte_eth_representor_info_get(uint16_t port_id,
struct rte_eth_representor_info *info)
@@ -6722,34 +5849,6 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
-int
-rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
-{
- static const struct rte_mbuf_dynfield field_desc = {
- .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
- .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
- .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
- };
- static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
- .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
- };
- int offset;
-
- offset = rte_mbuf_dynfield_register(&field_desc);
- if (offset < 0)
- return -1;
- if (field_offset != NULL)
- *field_offset = offset;
-
- offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
- if (offset < 0)
- return -1;
- if (flag_offset != NULL)
- *flag_offset = offset;
-
- return 0;
-}
-
int
rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
{
--
2.34.1
^ permalink raw reply [flat|nested] 24+ messages in thread
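
A minimal consumer-side sketch (not part of the patch itself) of how a bus probe/remove path uses the rte_eth_dev_create()/rte_eth_dev_destroy() helpers moved in the hunks above, assuming a hypothetical mydrv PCI PMD; every mydrv_* name is illustrative only:

#include <ethdev_driver.h>
#include <rte_bus_pci.h>

struct mydrv_priv {
	uint64_t flags; /* hypothetical per-port private data */
};

static int
mydrv_ethdev_init(struct rte_eth_dev *ethdev __rte_unused,
		void *init_params __rte_unused)
{
	/* a real driver would set ethdev->dev_ops, MAC addresses,
	 * burst handlers, etc. here */
	return 0;
}

static int
mydrv_ethdev_uninit(struct rte_eth_dev *ethdev __rte_unused)
{
	return 0;
}

static int
mydrv_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
		struct rte_pci_device *pci_dev)
{
	/* allocates the port (primary) or attaches to it (secondary),
	 * runs the init callback and finishes probing in one call */
	return rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
				sizeof(struct mydrv_priv),
				NULL, NULL, mydrv_ethdev_init, NULL);
}

static int
mydrv_pci_remove(struct rte_pci_device *pci_dev)
{
	struct rte_eth_dev *ethdev;

	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
	if (ethdev == NULL)
		return 0;

	/* runs the uninit callback, then releases the port */
	return rte_eth_dev_destroy(ethdev, mydrv_ethdev_uninit);
}
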
* Re: [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function
2022-02-11 19:11 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
@ 2022-02-11 20:18 ` Ferruh Yigit
1 sibling, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2022-02-11 20:18 UTC (permalink / raw)
To: Ciara Loftus, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Hemant Agrawal, Sachin Saxena, John Daley, Hyong Youb Kim,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Matan Azrad, Viacheslav Ovsiienko,
Gagandeep Singh, Devendra Singh Rawat, Thomas Monjalon,
Andrew Rybchenko, Ray Kinsella
Cc: dev, Morten Brørup
On 2/11/2022 7:11 PM, Ferruh Yigit wrote:
> Multiple PMDs have dummy/noop Rx/Tx packet burst functions.
>
> These dummy functions are very simple, introduce a common function in
> the ethdev and update drivers to use it instead of each driver having
> its own functions.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
Series applied to dpdk-next-net/main, thanks.
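
For reference, the helper applied here is just a no-op burst handler; below is a minimal sketch of the driver-side usage, assuming a hypothetical mydrv PMD (only rte_eth_pkt_burst_dummy itself comes from the series):

#include <ethdev_driver.h>

/* Hypothetical stop path: park the fast-path handlers on the shared
 * dummy burst function instead of keeping a per-driver noop copy. */
static int
mydrv_dev_stop(struct rte_eth_dev *dev)
{
	dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
	dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;

	/* driver-specific queue/hardware teardown would follow */
	return 0;
}
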
^ permalink raw reply [flat|nested] 24+ messages in thread
Thread overview: 24+ messages
2022-02-08 19:44 [PATCH] ethdev: introduce generic dummy packet burst function Ferruh Yigit
2022-02-10 7:38 ` Loftus, Ciara
2022-02-10 8:59 ` Ferruh Yigit
2022-02-10 11:04 ` Morten Brørup
2022-02-10 11:39 ` Andrew Rybchenko
2022-02-10 11:47 ` Morten Brørup
2022-02-10 11:51 ` Andrew Rybchenko
2022-02-10 14:52 ` Slava Ovsiienko
2022-02-10 13:58 ` Ferruh Yigit
2022-02-10 16:30 ` Stephen Hemminger
2022-02-10 18:40 ` Thomas Monjalon
2022-02-11 9:49 ` [PATCH v2] " Ferruh Yigit
2022-02-11 17:14 ` [PATCH v3 1/2] " Ferruh Yigit
2022-02-11 17:14 ` [PATCH v3 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 18:09 ` Thomas Monjalon
2022-02-11 18:39 ` Ferruh Yigit
2022-02-11 18:03 ` [PATCH v3 1/2] ethdev: introduce generic dummy packet burst function Thomas Monjalon
2022-02-11 18:38 ` [PATCH v4 " Ferruh Yigit
2022-02-11 18:38 ` [PATCH v4 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 18:55 ` Thomas Monjalon
2022-02-11 19:01 ` Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit
2022-02-11 19:11 ` [PATCH v5 2/2] ethdev: move driver interface functions to its own file Ferruh Yigit
2022-02-11 20:18 ` [PATCH v5 1/2] ethdev: introduce generic dummy packet burst function Ferruh Yigit