* [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers
@ 2014-05-26 11:31 David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 1/5] ethdev: retrieve flow control configuration David Marchand
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev
This patchset introduces three new ethdev operations: flow control configuration
retrieval and MTU get/set operations.
--
David Marchand
David Marchand (1):
ethdev: add autoneg parameter in flow ctrl accessors
Ivan Boule (2):
ixgbe: add get/set_mtu to ixgbevf
app/testpmd: allow to configure mtu
Samuel Gauthier (1):
ethdev: add mtu accessors
Zijie Pan (1):
ethdev: retrieve flow control configuration
app/test-pmd/cmdline.c | 54 ++++++++++++
app/test-pmd/config.c | 13 +++
app/test-pmd/testpmd.h | 2 +-
lib/librte_ether/rte_ethdev.c | 47 ++++++++++
lib/librte_ether/rte_ethdev.h | 62 ++++++++++++-
lib/librte_pmd_e1000/e1000_ethdev.h | 4 +
lib/librte_pmd_e1000/em_ethdev.c | 111 +++++++++++++++++++++++
lib/librte_pmd_e1000/em_rxtx.c | 11 +++
lib/librte_pmd_e1000/igb_ethdev.c | 111 +++++++++++++++++++++++
lib/librte_pmd_e1000/igb_rxtx.c | 10 +++
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 165 ++++++++++++++++++++++++++++++++++-
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 2 +
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 28 +++++-
13 files changed, 614 insertions(+), 6 deletions(-)
--
1.7.10.4
* [dpdk-dev] [PATCH 1/5] ethdev: retrieve flow control configuration
2014-05-26 11:31 [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers David Marchand
@ 2014-05-26 11:31 ` David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 2/5] ethdev: add autoneg parameter in flow ctrl accessors David Marchand
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev; +Cc: Zijie Pan
From: Zijie Pan <zijie.pan@6wind.com>
This patch adds a new function to the ethdev API to retrieve the current flow
control configuration.
This operation has been implemented for rte_em_pmd, rte_igb_pmd and
rte_ixgbe_pmd.
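For illustration only (not part of the patch), an application could query a
configured and started port as follows (assuming a valid port_id and the usual
rte_ethdev.h / stdio.h includes):

    struct rte_eth_fc_conf fc_conf;
    int ret;

    ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
    if (ret == 0)
        printf("port %d flow control: mode %d, pause_time %d\n",
               port_id, fc_conf.mode, fc_conf.pause_time);
    else
        printf("flow_ctrl_get failed on port %d: %d\n", port_id, ret);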
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
---
lib/librte_ether/rte_ethdev.c | 16 ++++++++++
lib/librte_ether/rte_ethdev.h | 24 +++++++++++++--
lib/librte_pmd_e1000/em_ethdev.c | 44 ++++++++++++++++++++++++++++
lib/librte_pmd_e1000/igb_ethdev.c | 44 ++++++++++++++++++++++++++++
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 55 +++++++++++++++++++++++++++++++++--
5 files changed, 179 insertions(+), 4 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index dabbdd2..31c18ef 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1456,6 +1456,22 @@ rte_eth_dev_fdir_set_masks(uint8_t port_id, struct rte_fdir_masks *fdir_mask)
}
int
+rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
+{
+ struct rte_eth_dev *dev;
+
+ if (port_id >= nb_ports) {
+ PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+ return (-ENODEV);
+ }
+
+ dev = &rte_eth_devices[port_id];
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->flow_ctrl_get, -ENOTSUP);
+ memset(fc_conf, 0, sizeof(*fc_conf));
+ return (*dev->dev_ops->flow_ctrl_get)(dev, fc_conf);
+}
+
+int
rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf)
{
struct rte_eth_dev *dev;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index d839b8c..04533e5 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -955,8 +955,12 @@ typedef int (*fdir_set_masks_t)(struct rte_eth_dev *dev,
struct rte_fdir_masks *fdir_masks);
/**< @internal Setup flow director masks on an Ethernet device */
+typedef int (*flow_ctrl_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf);
+/**< @internal Get current flow control parameter on an Ethernet device */
+
typedef int (*flow_ctrl_set_t)(struct rte_eth_dev *dev,
- struct rte_eth_fc_conf *fc_conf);
+ struct rte_eth_fc_conf *fc_conf);
/**< @internal Setup flow control parameter on an Ethernet device */
typedef int (*priority_flow_ctrl_set_t)(struct rte_eth_dev *dev,
@@ -1120,6 +1124,7 @@ struct eth_dev_ops {
eth_queue_release_t tx_queue_release;/**< Release TX queue.*/
eth_dev_led_on_t dev_led_on; /**< Turn on LED. */
eth_dev_led_off_t dev_led_off; /**< Turn off LED. */
+ flow_ctrl_get_t flow_ctrl_get; /**< Get flow control. */
flow_ctrl_set_t flow_ctrl_set; /**< Setup flow control. */
priority_flow_ctrl_set_t priority_flow_ctrl_set; /**< Setup priority flow control.*/
eth_mac_addr_remove_t mac_addr_remove; /**< Remove MAC address */
@@ -2308,6 +2313,21 @@ int rte_eth_led_on(uint8_t port_id);
int rte_eth_led_off(uint8_t port_id);
/**
+ * Get current status of the Ethernet link flow control for Ethernet device
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param fc_conf
+ * The pointer to the structure where to store the flow control parameters.
+ * @return
+ * - (0) if successful.
+ * - (-ENOTSUP) if hardware doesn't support flow control.
+ * - (-ENODEV) if *port_id* invalid.
+ */
+int rte_eth_dev_flow_ctrl_get(uint8_t port_id,
+ struct rte_eth_fc_conf *fc_conf);
+
+/**
* Configure the Ethernet link flow control for Ethernet device
*
* @param port_id
@@ -2322,7 +2342,7 @@ int rte_eth_led_off(uint8_t port_id);
* - (-EIO) if flow control setup failure
*/
int rte_eth_dev_flow_ctrl_set(uint8_t port_id,
- struct rte_eth_fc_conf *fc_conf);
+ struct rte_eth_fc_conf *fc_conf);
/**
* Configure the Ethernet priority flow control under DCB environment
diff --git a/lib/librte_pmd_e1000/em_ethdev.c b/lib/librte_pmd_e1000/em_ethdev.c
index 2f0e1a0..74dc7e5 100644
--- a/lib/librte_pmd_e1000/em_ethdev.c
+++ b/lib/librte_pmd_e1000/em_ethdev.c
@@ -77,6 +77,8 @@ static void eth_em_stats_get(struct rte_eth_dev *dev,
static void eth_em_stats_reset(struct rte_eth_dev *dev);
static void eth_em_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+static int eth_em_flow_ctrl_get(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf);
static int eth_em_flow_ctrl_set(struct rte_eth_dev *dev,
struct rte_eth_fc_conf *fc_conf);
static int eth_em_interrupt_setup(struct rte_eth_dev *dev);
@@ -153,6 +155,7 @@ static struct eth_dev_ops eth_em_ops = {
.tx_queue_release = eth_em_tx_queue_release,
.dev_led_on = eth_em_led_on,
.dev_led_off = eth_em_led_off,
+ .flow_ctrl_get = eth_em_flow_ctrl_get,
.flow_ctrl_set = eth_em_flow_ctrl_set,
.mac_addr_add = eth_em_rar_set,
.mac_addr_remove = eth_em_rar_clear,
@@ -1363,6 +1366,47 @@ eth_em_led_off(struct rte_eth_dev *dev)
}
static int
+eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+ struct e1000_hw *hw;
+ uint32_t ctrl;
+ int tx_pause;
+ int rx_pause;
+
+ hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ fc_conf->pause_time = hw->fc.pause_time;
+ fc_conf->high_water = hw->fc.high_water;
+ fc_conf->low_water = hw->fc.low_water;
+ fc_conf->send_xon = hw->fc.send_xon;
+
+ /*
+ * Return rx_pause and tx_pause status according to actual setting of
+ * the TFCE and RFCE bits in the CTRL register.
+ */
+ ctrl = E1000_READ_REG(hw, E1000_CTRL);
+ if (ctrl & E1000_CTRL_TFCE)
+ tx_pause = 1;
+ else
+ tx_pause = 0;
+
+ if (ctrl & E1000_CTRL_RFCE)
+ rx_pause = 1;
+ else
+ rx_pause = 0;
+
+ if (rx_pause && tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+ return 0;
+}
+
+static int
eth_em_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
{
struct e1000_hw *hw;
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index 777413e..6dc82c2 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -72,6 +72,8 @@ static void eth_igb_stats_get(struct rte_eth_dev *dev,
static void eth_igb_stats_reset(struct rte_eth_dev *dev);
static void eth_igb_infos_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+static int eth_igb_flow_ctrl_get(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf);
static int eth_igb_flow_ctrl_set(struct rte_eth_dev *dev,
struct rte_eth_fc_conf *fc_conf);
static int eth_igb_lsc_interrupt_setup(struct rte_eth_dev *dev);
@@ -189,6 +191,7 @@ static struct eth_dev_ops eth_igb_ops = {
.tx_queue_release = eth_igb_tx_queue_release,
.dev_led_on = eth_igb_led_on,
.dev_led_off = eth_igb_led_off,
+ .flow_ctrl_get = eth_igb_flow_ctrl_get,
.flow_ctrl_set = eth_igb_flow_ctrl_set,
.mac_addr_add = eth_igb_rar_set,
.mac_addr_remove = eth_igb_rar_clear,
@@ -1803,6 +1806,47 @@ eth_igb_led_off(struct rte_eth_dev *dev)
}
static int
+eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+ struct e1000_hw *hw;
+ uint32_t ctrl;
+ int tx_pause;
+ int rx_pause;
+
+ hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ fc_conf->pause_time = hw->fc.pause_time;
+ fc_conf->high_water = hw->fc.high_water;
+ fc_conf->low_water = hw->fc.low_water;
+ fc_conf->send_xon = hw->fc.send_xon;
+
+ /*
+ * Return rx_pause and tx_pause status according to actual setting of
+ * the TFCE and RFCE bits in the CTRL register.
+ */
+ ctrl = E1000_READ_REG(hw, E1000_CTRL);
+ if (ctrl & E1000_CTRL_TFCE)
+ tx_pause = 1;
+ else
+ tx_pause = 0;
+
+ if (ctrl & E1000_CTRL_RFCE)
+ rx_pause = 1;
+ else
+ rx_pause = 0;
+
+ if (rx_pause && tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+ return 0;
+}
+
+static int
eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
{
struct e1000_hw *hw;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index d1718e1..4633654 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -129,8 +129,10 @@ static void ixgbe_vlan_hw_extend_disable(struct rte_eth_dev *dev);
static int ixgbe_dev_led_on(struct rte_eth_dev *dev);
static int ixgbe_dev_led_off(struct rte_eth_dev *dev);
-static int ixgbe_flow_ctrl_set(struct rte_eth_dev *dev,
- struct rte_eth_fc_conf *fc_conf);
+static int ixgbe_flow_ctrl_get(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf);
+static int ixgbe_flow_ctrl_set(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf);
static int ixgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
struct rte_eth_pfc_conf *pfc_conf);
static int ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
@@ -274,6 +276,7 @@ static struct eth_dev_ops ixgbe_eth_dev_ops = {
.tx_queue_release = ixgbe_dev_tx_queue_release,
.dev_led_on = ixgbe_dev_led_on,
.dev_led_off = ixgbe_dev_led_off,
+ .flow_ctrl_get = ixgbe_flow_ctrl_get,
.flow_ctrl_set = ixgbe_flow_ctrl_set,
.priority_flow_ctrl_set = ixgbe_priority_flow_ctrl_set,
.mac_addr_add = ixgbe_add_rar,
@@ -2176,6 +2179,54 @@ ixgbe_dev_led_off(struct rte_eth_dev *dev)
}
static int
+ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+ struct ixgbe_hw *hw;
+ uint32_t mflcn_reg;
+ uint32_t fccfg_reg;
+ int rx_pause;
+ int tx_pause;
+
+ hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ fc_conf->pause_time = hw->fc.pause_time;
+ fc_conf->high_water = hw->fc.high_water[0];
+ fc_conf->low_water = hw->fc.low_water[0];
+ fc_conf->send_xon = hw->fc.send_xon;
+
+ /*
+ * Return rx_pause status according to actual setting of
+ * MFLCN register.
+ */
+ mflcn_reg = IXGBE_READ_REG(hw, IXGBE_MFLCN);
+ if (mflcn_reg & (IXGBE_MFLCN_RPFCE | IXGBE_MFLCN_RFCE))
+ rx_pause = 1;
+ else
+ rx_pause = 0;
+
+ /*
+ * Return tx_pause status according to actual setting of
+ * FCCFG register.
+ */
+ fccfg_reg = IXGBE_READ_REG(hw, IXGBE_FCCFG);
+ if (fccfg_reg & (IXGBE_FCCFG_TFCE_802_3X | IXGBE_FCCFG_TFCE_PRIORITY))
+ tx_pause = 1;
+ else
+ tx_pause = 0;
+
+ if (rx_pause && tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+ return 0;
+}
+
+static int
ixgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
{
struct ixgbe_hw *hw;
--
1.7.10.4
* [dpdk-dev] [PATCH 2/5] ethdev: add autoneg parameter in flow ctrl accessors
2014-05-26 11:31 [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 1/5] ethdev: retrieve flow control configuration David Marchand
@ 2014-05-26 11:31 ` David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors David Marchand
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev
Add an autoneg field to the flow control parameters.
This makes it easier to understand why changing some parameters does not always
have the expected result.
Changing autoneg is not supported at the moment.
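For illustration only (not part of the patch): since the drivers now reject a
set request whose autoneg value differs from the current hardware state, a
caller would typically read the current configuration and change only the
fields it cares about. A minimal sketch, assuming a valid port_id:

    struct rte_eth_fc_conf fc_conf;

    if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) == 0) {
        /* keep the reported autoneg value, change only the mode */
        fc_conf.mode = RTE_FC_FULL;
        if (rte_eth_dev_flow_ctrl_set(port_id, &fc_conf) < 0)
            printf("flow_ctrl_set failed\n");
    }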
Signed-off-by: David Marchand <david.marchand@6wind.com>
---
lib/librte_ether/rte_ethdev.h | 1 +
lib/librte_pmd_e1000/em_ethdev.c | 3 +++
lib/librte_pmd_e1000/igb_ethdev.c | 3 +++
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 3 +++
4 files changed, 10 insertions(+)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 04533e5..39351ea 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -550,6 +550,7 @@ struct rte_eth_fc_conf {
uint16_t send_xon; /**< Is XON frame need be sent */
enum rte_eth_fc_mode mode; /**< Link flow control mode */
uint8_t mac_ctrl_frame_fwd; /**< Forward MAC control frames */
+ uint8_t autoneg; /**< Use Pause autoneg */
};
/**
diff --git a/lib/librte_pmd_e1000/em_ethdev.c b/lib/librte_pmd_e1000/em_ethdev.c
index 74dc7e5..c148cbc 100644
--- a/lib/librte_pmd_e1000/em_ethdev.c
+++ b/lib/librte_pmd_e1000/em_ethdev.c
@@ -1378,6 +1378,7 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc_conf->high_water = hw->fc.high_water;
fc_conf->low_water = hw->fc.low_water;
fc_conf->send_xon = hw->fc.send_xon;
+ fc_conf->autoneg = hw->mac.autoneg;
/*
* Return rx_pause and tx_pause status according to actual setting of
@@ -1422,6 +1423,8 @@ eth_em_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
uint32_t rctl;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ if (fc_conf->autoneg != hw->mac.autoneg)
+ return -ENOTSUP;
rx_buf_size = em_get_rx_buffer_size(hw);
PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x \n", rx_buf_size);
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index 6dc82c2..e15fe5a 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -1818,6 +1818,7 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc_conf->high_water = hw->fc.high_water;
fc_conf->low_water = hw->fc.low_water;
fc_conf->send_xon = hw->fc.send_xon;
+ fc_conf->autoneg = hw->mac.autoneg;
/*
* Return rx_pause and tx_pause status according to actual setting of
@@ -1862,6 +1863,8 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
uint32_t rctl;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ if (fc_conf->autoneg != hw->mac.autoneg)
+ return -ENOTSUP;
rx_buf_size = igb_get_rx_buffer_size(hw);
PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x \n", rx_buf_size);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 4633654..c876c3e 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -2193,6 +2193,7 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
fc_conf->high_water = hw->fc.high_water[0];
fc_conf->low_water = hw->fc.low_water[0];
fc_conf->send_xon = hw->fc.send_xon;
+ fc_conf->autoneg = !hw->fc.disable_fc_autoneg;
/*
* Return rx_pause status according to actual setting of
@@ -2244,6 +2245,8 @@ ixgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
PMD_INIT_FUNC_TRACE();
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ if (fc_conf->autoneg != !hw->fc.disable_fc_autoneg)
+ return -ENOTSUP;
rx_buf_size = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(0));
PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x \n", rx_buf_size);
--
1.7.10.4
* [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors
2014-05-26 11:31 [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 1/5] ethdev: retrieve flow control configuration David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 2/5] ethdev: add autoneg parameter in flow ctrl accessors David Marchand
@ 2014-05-26 11:31 ` David Marchand
2014-06-10 17:23 ` Ananyev, Konstantin
2014-05-26 11:31 ` [dpdk-dev] [PATCH 4/5] ixgbe: add get/set_mtu to ixgbevf David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 5/5] app/testpmd: allow to configure mtu David Marchand
4 siblings, 1 reply; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev
From: Samuel Gauthier <samuel.gauthier@6wind.com>
This patch adds two new functions to the ethdev API to retrieve the current MTU
and change the MTU of a port.
These operations have been implemented for rte_em_pmd, rte_igb_pmd and
rte_ixgbe_pmd.
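For illustration only (not part of the patch), note that both accessors take a
pointer to the MTU value as declared here. A minimal sketch, assuming a valid
port_id and the usual includes:

    uint16_t mtu;

    if (rte_eth_dev_get_mtu(port_id, &mtu) == 0)
        printf("port %d current MTU: %d\n", port_id, mtu);

    mtu = 9000; /* arbitrary jumbo value, purely as an example */
    if (rte_eth_dev_set_mtu(port_id, &mtu) < 0)
        printf("setting MTU failed (out of range or not supported)\n");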
Signed-off-by: Samuel Gauthier <samuel.gauthier@6wind.com>
Signed-off-by: Ivan Boule <ivan.boule@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
---
lib/librte_ether/rte_ethdev.c | 31 +++++++++++++++++
lib/librte_ether/rte_ethdev.h | 37 ++++++++++++++++++++
lib/librte_pmd_e1000/e1000_ethdev.h | 4 +++
lib/librte_pmd_e1000/em_ethdev.c | 64 +++++++++++++++++++++++++++++++++++
lib/librte_pmd_e1000/em_rxtx.c | 11 ++++++
lib/librte_pmd_e1000/igb_ethdev.c | 64 +++++++++++++++++++++++++++++++++++
lib/librte_pmd_e1000/igb_rxtx.c | 10 ++++++
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 62 +++++++++++++++++++++++++++++++++
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 2 ++
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 10 ++++++
10 files changed, 295 insertions(+)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 31c18ef..ece2a68 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1058,6 +1058,37 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr)
ether_addr_copy(&dev->data->mac_addrs[0], mac_addr);
}
+
+int
+rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu)
+{
+ struct rte_eth_dev *dev;
+
+ if (port_id >= nb_ports) {
+ PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+ return (-ENODEV);
+ }
+
+ dev = &rte_eth_devices[port_id];
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_get, -ENOTSUP);
+ return (*dev->dev_ops->mtu_get)(dev, mtu);
+}
+
+int
+rte_eth_dev_set_mtu(uint8_t port_id, uint16_t *mtu)
+{
+ struct rte_eth_dev *dev;
+
+ if (port_id >= nb_ports) {
+ PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+ return (-ENODEV);
+ }
+
+ dev = &rte_eth_devices[port_id];
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->mtu_set, -ENOTSUP);
+ return (*dev->dev_ops->mtu_set)(dev, mtu);
+}
+
int
rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
{
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 39351ea..177a6ec 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -890,6 +890,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @Check DD bit of specific RX descriptor */
+typedef int (*mtu_get_t)(struct rte_eth_dev *dev, uint16_t *mtu);
+/**< @internal Get MTU. */
+
+typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t *mtu);
+/**< @internal Set MTU. */
+
typedef int (*vlan_filter_set_t)(struct rte_eth_dev *dev,
uint16_t vlan_id,
int on);
@@ -1113,6 +1119,8 @@ struct eth_dev_ops {
eth_queue_stats_mapping_set_t queue_stats_mapping_set;
/**< Configure per queue stat counter mapping. */
eth_dev_infos_get_t dev_infos_get; /**< Get device info. */
+ mtu_get_t mtu_get; /**< Get MTU. */
+ mtu_set_t mtu_set; /**< Set MTU. */
vlan_filter_set_t vlan_filter_set; /**< Filter VLAN Setup. */
vlan_tpid_set_t vlan_tpid_set; /**< Outer VLAN TPID Setup. */
vlan_strip_queue_set_t vlan_strip_queue_set; /**< VLAN Stripping on queue. */
@@ -1680,6 +1688,35 @@ extern void rte_eth_dev_info_get(uint8_t port_id,
struct rte_eth_dev_info *dev_info);
/**
+ * Retrieve the MTU of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param mtu
+ * A pointer to a uint16_t where the retrieved MTU is to be stored.
+ * @return
+ * - (0) if successful.
+ * - (-ENOTSUP) if operation is not supported.
+ * - (-ENODEV) if *port_id* invalid.
+ */
+extern int rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu);
+
+/**
+ * Change the MTU of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param mtu
+ * A pointer to a uint16_t where the MTU to be applied is stored.
+ * @return
+ * - (0) if successful.
+ * - (-ENOTSUP) if operation is not supported.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EINVAL) if *mtu* invalid.
+ */
+extern int rte_eth_dev_set_mtu(uint8_t port_id, uint16_t *mtu);
+
+/**
* Enable/Disable hardware filtering by an Ethernet device of received
* VLAN packets tagged with a given VLAN Tag Identifier.
*
diff --git a/lib/librte_pmd_e1000/e1000_ethdev.h b/lib/librte_pmd_e1000/e1000_ethdev.h
index 8790601..5cbf436 100644
--- a/lib/librte_pmd_e1000/e1000_ethdev.h
+++ b/lib/librte_pmd_e1000/e1000_ethdev.h
@@ -138,6 +138,8 @@ uint16_t eth_igb_recv_pkts(void *rxq, struct rte_mbuf **rx_pkts,
uint16_t eth_igb_recv_scattered_pkts(void *rxq,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+void eth_igb_dev_set_rx_scatter_mode(struct rte_eth_dev *dev);
+
int eth_igb_rss_hash_update(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf);
@@ -192,4 +194,6 @@ uint16_t eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+void eth_em_set_rx_scatter_mode(struct rte_eth_dev *dev);
+
#endif /* _E1000_ETHDEV_H_ */
diff --git a/lib/librte_pmd_e1000/em_ethdev.c b/lib/librte_pmd_e1000/em_ethdev.c
index c148cbc..044d73f 100644
--- a/lib/librte_pmd_e1000/em_ethdev.c
+++ b/lib/librte_pmd_e1000/em_ethdev.c
@@ -94,6 +94,9 @@ static void em_hw_control_release(struct e1000_hw *hw);
static void em_init_manageability(struct e1000_hw *hw);
static void em_release_manageability(struct e1000_hw *hw);
+static int eth_em_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+static int eth_em_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+
static int eth_em_vlan_filter_set(struct rte_eth_dev *dev,
uint16_t vlan_id, int on);
static void eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask);
@@ -145,6 +148,8 @@ static struct eth_dev_ops eth_em_ops = {
.stats_get = eth_em_stats_get,
.stats_reset = eth_em_stats_reset,
.dev_infos_get = eth_em_infos_get,
+ .mtu_get = eth_em_get_mtu,
+ .mtu_set = eth_em_set_mtu,
.vlan_filter_set = eth_em_vlan_filter_set,
.vlan_offload_set = eth_em_vlan_offload_set,
.rx_queue_setup = eth_em_rx_queue_setup,
@@ -1487,6 +1492,65 @@ eth_em_rar_clear(struct rte_eth_dev *dev, uint32_t index)
e1000_rar_set(hw, addr, index);
}
+static int
+eth_em_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ *mtu = (uint16_t) dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ return 0;
+}
+
+static uint32_t
+em_get_rctl_buffer_size(uint32_t rctl)
+{
+ uint32_t rctl_buf_size;
+
+ static uint16_t std_buf_sizes[4] = {
+ 2048, 1024, 512, 256,
+ };
+ static uint16_t ext_buf_sizes[4] = {
+ 0 /* invalid */, 16384, 8192, 4096,
+ };
+
+ rctl_buf_size = ((rctl & 0x00030000) >> 16);
+ if (rctl & E1000_RCTL_BSEX)
+ return (uint32_t) ext_buf_sizes[rctl_buf_size];
+ else
+ return (uint32_t) std_buf_sizes[rctl_buf_size];
+}
+
+static int
+eth_em_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ struct e1000_hw *hw;
+ uint32_t frame_size;
+ uint32_t rx_buf_size;
+ uint32_t rctl;
+
+ frame_size = *mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ /* check that mtu is within the allowed range */
+ if ((*mtu < 68) || (frame_size > em_get_max_pktlen(hw)))
+ return -EINVAL;
+
+ rctl = E1000_READ_REG(hw, E1000_RCTL);
+ rx_buf_size = em_get_rctl_buffer_size(rctl); /* set at init time. */
+ /* switch to jumbo mode if needed */
+ if (frame_size > rx_buf_size) {
+ eth_em_set_rx_scatter_mode(dev);
+ dev->data->dev_conf.rxmode.jumbo_frame = 1;
+ rctl |= E1000_RCTL_LPE;
+ } else {
+ dev->data->dev_conf.rxmode.jumbo_frame = 0;
+ rctl &= ~E1000_RCTL_LPE;
+ }
+ E1000_WRITE_REG(hw, E1000_RCTL, rctl);
+
+ /* update max frame size */
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ return 0;
+}
+
struct rte_driver em_pmd_drv = {
.type = PMD_PDEV,
.init = rte_em_pmd_init,
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 4f98a3f..5e7f74d 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -1063,6 +1063,17 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return (nb_rx);
}
+void
+eth_em_set_rx_scatter_mode(struct rte_eth_dev *dev)
+{
+ if (dev->rx_pkt_burst != eth_em_recv_scattered_pkts) {
+ dev->rx_pkt_burst = (eth_rx_burst_t)eth_em_recv_scattered_pkts;
+ dev->data->scattered_rx = 1;
+ /* make sure this setting is viewed by all cores */
+ rte_wmb();
+ }
+}
+
/*
* Rings setup and release.
*
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index e15fe5a..8f46963 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -87,6 +87,9 @@ static void igb_hw_control_release(struct e1000_hw *hw);
static void igb_init_manageability(struct e1000_hw *hw);
static void igb_release_manageability(struct e1000_hw *hw);
+static int eth_igb_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+static int eth_igb_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+
static int eth_igb_vlan_filter_set(struct rte_eth_dev *dev,
uint16_t vlan_id, int on);
static void eth_igb_vlan_tpid_set(struct rte_eth_dev *dev, uint16_t tpid_id);
@@ -180,6 +183,8 @@ static struct eth_dev_ops eth_igb_ops = {
.stats_get = eth_igb_stats_get,
.stats_reset = eth_igb_stats_reset,
.dev_infos_get = eth_igb_infos_get,
+ .mtu_get = eth_igb_get_mtu,
+ .mtu_set = eth_igb_set_mtu,
.vlan_filter_set = eth_igb_vlan_filter_set,
.vlan_tpid_set = eth_igb_vlan_tpid_set,
.vlan_offload_set = eth_igb_vlan_offload_set,
@@ -2238,6 +2243,65 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+eth_igb_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ uint32_t rlpml;
+ struct e1000_hw *hw;
+
+ hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ rlpml = E1000_READ_REG(hw, E1000_RLPML);
+
+ *mtu = (uint16_t) rlpml;
+ return 0;
+}
+
+static int
+eth_igb_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ uint32_t rctl;
+ struct e1000_hw *hw;
+ struct rte_eth_dev_info dev_info;
+ uint32_t frame_size = *mtu + ETHER_HDR_LEN +
+ ETHER_CRC_LEN + VLAN_TAG_SIZE;
+
+ hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+#ifdef RTE_LIBRTE_82571_SUPPORT
+ /* XXX: not bigger than max_rx_pktlen */
+ if (hw->mac.type == e1000_82571)
+ return -ENOTSUP;
+#endif
+ eth_igb_infos_get(dev, &dev_info);
+
+ /* check that mtu is within the allowed range */
+ if ((*mtu < 68) ||
+ (frame_size > dev_info.max_rx_pktlen))
+ return -EINVAL;
+
+ rctl = E1000_READ_REG(hw, E1000_RCTL);
+
+ /* switch to jumbo mode if needed */
+ if (frame_size > ETHER_MAX_LEN) {
+ eth_igb_dev_set_rx_scatter_mode(dev);
+ dev->data->dev_conf.rxmode.jumbo_frame = 1;
+ rctl |= E1000_RCTL_LPE;
+ } else {
+ dev->data->dev_conf.rxmode.jumbo_frame = 0;
+ rctl &= ~E1000_RCTL_LPE;
+ }
+ E1000_WRITE_REG(hw, E1000_RCTL, rctl);
+
+ /* update max frame size */
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+ return 0;
+}
+
static struct rte_driver pmd_igb_drv = {
.type = PMD_PDEV,
.init = rte_igb_pmd_init,
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 6b48df5..61ec0ff 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -2155,6 +2155,16 @@ eth_igb_tx_init(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_TCTL, tctl);
}
+void eth_igb_dev_set_rx_scatter_mode(struct rte_eth_dev *dev)
+{
+ if (dev->rx_pkt_burst != eth_igb_recv_scattered_pkts) {
+ dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
+ /* we must ensure that this is done when we leave the
+ function */
+ rte_wmb();
+ }
+}
+
/*********************************************************************
*
* Enable VF receive unit.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index c876c3e..b9db1f4 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -114,6 +114,10 @@ static int ixgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
uint8_t is_rx);
static void ixgbe_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+
+static int ixgbe_dev_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+static int ixgbe_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+
static int ixgbe_vlan_filter_set(struct rte_eth_dev *dev,
uint16_t vlan_id, int on);
static void ixgbe_vlan_tpid_set(struct rte_eth_dev *dev, uint16_t tpid_id);
@@ -264,6 +268,8 @@ static struct eth_dev_ops ixgbe_eth_dev_ops = {
.stats_reset = ixgbe_dev_stats_reset,
.queue_stats_mapping_set = ixgbe_dev_queue_stats_mapping_set,
.dev_infos_get = ixgbe_dev_info_get,
+ .mtu_get = ixgbe_dev_get_mtu,
+ .mtu_set = ixgbe_dev_set_mtu,
.vlan_filter_set = ixgbe_vlan_filter_set,
.vlan_tpid_set = ixgbe_vlan_tpid_set,
.vlan_offload_set = ixgbe_vlan_offload_set,
@@ -2590,6 +2596,62 @@ ixgbe_remove_rar(struct rte_eth_dev *dev, uint32_t index)
ixgbe_clear_rar(hw, index);
}
+static int
+ixgbe_dev_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ uint32_t maxfrs;
+ struct ixgbe_hw *hw;
+
+ hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
+ *mtu = (uint16_t) (((0xFFFF0000 & maxfrs) >> 16) -
+ ETHER_HDR_LEN - ETHER_CRC_LEN);
+ return 0;
+}
+
+static int
+ixgbe_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ uint32_t hlreg0;
+ uint32_t maxfrs;
+ struct ixgbe_hw *hw;
+ struct rte_eth_dev_info dev_info;
+ uint32_t frame_size = *mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+ ixgbe_dev_info_get(dev, &dev_info);
+
+ /* check that mtu is within the allowed range */
+ if ((*mtu < 68) ||
+ (frame_size > dev_info.max_rx_pktlen))
+ return -EINVAL;
+
+ hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+
+ /* switch to jumbo mode if needed */
+ if (frame_size > ETHER_MAX_LEN) {
+ ixgbe_dev_set_rx_scatter_mode(dev);
+ dev->data->dev_conf.rxmode.jumbo_frame = 1;
+ hlreg0 |= IXGBE_HLREG0_JUMBOEN;
+ } else {
+ dev->data->dev_conf.rxmode.jumbo_frame = 0;
+ hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
+ }
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
+
+ /* update max frame size */
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
+ maxfrs &= 0x0000FFFF;
+ maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
+
+ return 0;
+}
+
/*
* Virtual Function operations
*/
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
index 846db0a..a667ec2 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
@@ -235,6 +235,8 @@ uint16_t ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+void ixgbe_dev_set_rx_scatter_mode(struct rte_eth_dev *dev);
+
int ixgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 2d2c1a0..b05c1ba 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3892,3 +3892,13 @@ ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev)
}
}
+
+void ixgbe_dev_set_rx_scatter_mode(struct rte_eth_dev *dev)
+{
+ if (dev->rx_pkt_burst != ixgbe_recv_scattered_pkts) {
+ dev->rx_pkt_burst = ixgbe_recv_scattered_pkts;
+ /* we must ensure that this is done when we leave the
+ function */
+ rte_wmb();
+ }
+}
--
1.7.10.4
* [dpdk-dev] [PATCH 4/5] ixgbe: add get/set_mtu to ixgbevf
2014-05-26 11:31 [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers David Marchand
` (2 preceding siblings ...)
2014-05-26 11:31 ` [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors David Marchand
@ 2014-05-26 11:31 ` David Marchand
2014-05-26 11:31 ` [dpdk-dev] [PATCH 5/5] app/testpmd: allow to configure mtu David Marchand
4 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev
From: Ivan Boule <ivan.boule@6wind.com>
Support for jumbo frames in the ixgbevf Poll Mode Driver for 10GbE
82599 VF functions consists of the following enhancements:
- Implement the mtu_set function in the ixgbevf PMD, using the IXGBE_VF_SET_LPE
request of the version 1.0 of the VF/PF mailbox API for this purpose.
- Implement the mtu_get function in the ixgbevf PMD for the sake of consistency.
- Add a detailed explanation of the VF/PF rx max frame length negotiation.
- To deal with Jumbo frames, force the receive function to handle scattered
packets.
Signed-off-by: Ivan Boule <ivan.boule@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 45 +++++++++++++++++++++++++++++++++++
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +++++++++++++-
2 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index b9db1f4..230b758 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -194,6 +194,9 @@ static void ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
uint32_t index, uint32_t pool);
static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int ixgbevf_dev_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+static int ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu);
+
/*
* Define VF Stats MACRO for Non "cleared on read" register
*/
@@ -334,6 +337,8 @@ static struct eth_dev_ops ixgbevf_eth_dev_ops = {
.stats_reset = ixgbevf_dev_stats_reset,
.dev_close = ixgbevf_dev_close,
.dev_infos_get = ixgbe_dev_info_get,
+ .mtu_get = ixgbevf_dev_get_mtu,
+ .mtu_set = ixgbevf_dev_set_mtu,
.vlan_filter_set = ixgbevf_vlan_filter_set,
.vlan_strip_queue_set = ixgbevf_vlan_strip_queue_set,
.vlan_offload_set = ixgbevf_vlan_offload_set,
@@ -3319,6 +3324,46 @@ ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index)
}
}
+static int
+ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ struct ixgbe_hw *hw;
+ uint16_t max_frame = *mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
+
+ hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ /* MTU < 68 is an error and causes problems on some kernels */
+ if ((*mtu < 68) || (max_frame > ETHER_MAX_JUMBO_FRAME_LEN))
+ return -EINVAL;
+
+ /*
+ * When supported by the underlying PF driver, use the IXGBE_VF_SET_MTU
+ * request of the version 2.0 of the mailbox API.
+ * For now, use the IXGBE_VF_SET_LPE request of the version 1.0
+ * of the mailbox API.
+ * This call to IXGBE_SET_LPE action won't work with ixgbe pf drivers
+ * prior to 3.11.33 which contains the following change:
+ * "ixgbe: Enable jumbo frames support w/ SR-IOV"
+ */
+ ixgbevf_rlpml_set_vf(hw, max_frame);
+
+ /* update max frame size */
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
+ return 0;
+}
+
+static int
+ixgbevf_dev_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
+{
+ /*
+ * When available, use the IXGBE_VF_GET_MTU request
+ * of the version 2.0 of the mailbox API.
+ */
+ *mtu = (uint16_t) (dev->data->dev_conf.rxmode.max_rx_pkt_len -
+ (ETHER_HDR_LEN + ETHER_CRC_LEN));
+ return 0;
+}
+
static struct rte_driver rte_ixgbe_driver = {
.type = PMD_PDEV,
.init = rte_ixgbe_pmd_init,
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index b05c1ba..c797616 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3700,7 +3700,20 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* setup MTU */
+ /*
+ * When the VF driver issues a IXGBE_VF_RESET request, the PF driver
+ * disables the VF receipt of packets if the PF MTU is > 1500.
+ * This is done to deal with 82599 limitations that imposes
+ * the PF and all VFs to share the same MTU.
+ * Then, the PF driver enables again the VF receipt of packet when
+ * the VF driver issues a IXGBE_VF_SET_LPE request.
+ * In the meantime, the VF device cannot be used, even if the VF driver
+ * and the Guest VM network stack are ready to accept packets with a
+ * size up to the PF MTU.
+ * As a work-around to this PF behaviour, force the call to
+ * ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
+ * VF packets received can work in all cases.
+ */
ixgbevf_rlpml_set_vf(hw,
(uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len);
@@ -3783,6 +3796,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
}
}
+ /* Set RX function to deal with Jumbo frames */
+ ixgbe_dev_set_rx_scatter_mode(dev);
+
return 0;
}
--
1.7.10.4
* [dpdk-dev] [PATCH 5/5] app/testpmd: allow to configure mtu
2014-05-26 11:31 [dpdk-dev] [PATCH 0/5] add mtu and flow control handlers David Marchand
` (3 preceding siblings ...)
2014-05-26 11:31 ` [dpdk-dev] [PATCH 4/5] ixgbe: add get/set_mtu to ixgbevf David Marchand
@ 2014-05-26 11:31 ` David Marchand
4 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2014-05-26 11:31 UTC
To: dev
From: Ivan Boule <ivan.boule@6wind.com>
Take advantage of the new rte_eth_dev_set_mtu() function and make it possible to
configure the MTU of a device from testpmd.
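For example, with a PMD that implements the new mtu_set handler, the command
"port config mtu 0 9000" should set the MTU of port 0 to 9000 bytes.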
Signed-off-by: Ivan Boule <ivan.boule@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
---
app/test-pmd/cmdline.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++
app/test-pmd/config.c | 13 ++++++++++++
app/test-pmd/testpmd.h | 2 +-
3 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 68bcd7b..517944b 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -516,6 +516,8 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config all (txfreet|txrst|rxfreet) (value)\n"
" Set free threshold for rx/tx, or set"
" tx rs bit threshold.\n\n"
+ "port config mtu X value\n"
+ " Set the MTU of port X to a given value\n\n"
);
}
@@ -1014,6 +1016,57 @@ cmdline_parse_inst_t cmd_config_max_pkt_len = {
},
};
+/* *** configure port MTU *** */
+struct cmd_config_mtu_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t mtu;
+ uint8_t port_id;
+ uint16_t value;
+};
+
+static void
+cmd_config_mtu_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_mtu_result *res = parsed_result;
+
+ if (res->value < ETHER_MIN_LEN) {
+ printf("mtu cannot be less than %d\n", ETHER_MIN_LEN);
+ return;
+ }
+ port_mtu_set(res->port_id, res->value);
+}
+
+cmdline_parse_token_string_t cmd_config_mtu_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_mtu_result, port,
+ "port");
+cmdline_parse_token_string_t cmd_config_mtu_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_mtu_result, keyword,
+ "config");
+cmdline_parse_token_string_t cmd_config_mtu_mtu =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_mtu_result, keyword,
+ "mtu");
+cmdline_parse_token_num_t cmd_config_mtu_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, port_id, UINT8);
+cmdline_parse_token_num_t cmd_config_mtu_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, value, UINT16);
+
+cmdline_parse_inst_t cmd_config_mtu = {
+ .f = cmd_config_mtu_parsed,
+ .data = NULL,
+ .help_str = "port config mtu value",
+ .tokens = {
+ (void *)&cmd_config_mtu_port,
+ (void *)&cmd_config_mtu_keyword,
+ (void *)&cmd_config_mtu_mtu,
+ (void *)&cmd_config_mtu_port_id,
+ (void *)&cmd_config_mtu_value,
+ NULL,
+ },
+};
+
/* *** configure rx mode *** */
struct cmd_config_rx_mode_flag {
cmdline_fixed_string_t port;
@@ -5366,6 +5419,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_config_speed_all,
(cmdline_parse_inst_t *)&cmd_config_speed_specific,
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
+ (cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d3934e5..88a808c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -503,6 +503,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
display_port_reg_value(port_id, reg_off, reg_v);
}
+void
+port_mtu_set(portid_t port_id, uint16_t mtu)
+{
+ int diag;
+
+ if (port_id_is_invalid(port_id))
+ return;
+ diag = rte_eth_dev_set_mtu(port_id, &mtu);
+ if (diag == 0)
+ return;
+ printf("Set MTU failed. diag=%d\n", diag);
+}
+
/*
* RX/TX ring descriptors display functions.
*/
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0cf5a92..9b7dcbb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -454,7 +454,7 @@ void fwd_config_setup(void);
void set_def_fwd_config(void);
int init_fwd_streams(void);
-
+void port_mtu_set(portid_t port_id, uint16_t mtu);
void port_reg_bit_display(portid_t port_id, uint32_t reg_off, uint8_t bit_pos);
void port_reg_bit_set(portid_t port_id, uint32_t reg_off, uint8_t bit_pos,
uint8_t bit_v);
--
1.7.10.4
* Re: [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors
2014-05-26 11:31 ` [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors David Marchand
@ 2014-06-10 17:23 ` Ananyev, Konstantin
2014-06-12 14:20 ` David Marchand
0 siblings, 1 reply; 8+ messages in thread
From: Ananyev, Konstantin @ 2014-06-10 17:23 UTC
To: David Marchand, dev
Hi David,
>This patch adds two new functions to the ethdev API to retrieve the current MTU and
>change the MTU of a port.
>These operations have been implemented for rte_em_pmd, rte_igb_pmd and
>rte_ixgbe_pmd.
>+
>+void ixgbe_dev_set_rx_scatter_mode(struct rte_eth_dev *dev)
>+{
>+ if (dev->rx_pkt_burst != ixgbe_recv_scattered_pkts) {
>+ dev->rx_pkt_burst = ixgbe_recv_scattered_pkts;
>+ /* we must ensure that this is done when we leave the
>+ function */
>+ rte_wmb();
>+ }
>+}
I don't think it is safe to change RX function on the fly (without calling dev_stop first).
If before that ixgbe_recv_pkts_bulk_alloc() was used, then up to 32 mbufs could be stored internally in the stage[],
and ixgbe_recv_scattered_pkts() doesn't have a clue about them.
Also with ixgbe_recv_pkts_bulk_alloc(), it could be up to rx_free_thresh RX descriptors not armed yet with proper buffer addresses,
while ixgbe_recv_scattered_pkts() doesn't expect that - it re-arms RXD straight after it receives a packet from it.
I wonder if the ability to change the mtu on the fly (without dev_stop) is really needed?
If so, then we can probably allow ixgbe_dev_set_mtu() to increase the MTU only if the
new frame size is less than the RX buffer size OR the device is already using ixgbe_recv_scattered_pkts().
Something like:
frame_size <= rx_buf_size || dev->rx_pkt_burst == ixgbe_recv_scattered_pkts
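In code, such a guard could look roughly like this (illustrative only; frame_size
as computed in the patch, rx_buf_size being the RX buffer size mentioned above):

    if (frame_size > rx_buf_size &&
        dev->rx_pkt_burst != ixgbe_recv_scattered_pkts)
        return -EINVAL; /* would need a dev_stop/reconfigure cycle instead */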
> +static int
>+ixgbe_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
>+{
>....
>+
>+ /* switch to jumbo mode if needed */
>+ if (frame_size > ETHER_MAX_LEN) {
Why ETHER_MAX_LEN?
If we have rx_buffer big enough, we don't need to split packets even for jumbo frames.
Shouldn't it be something like what is done inside ixgbe_dev_rx_init():
if ((mtu + 2 * IXGBE_VLAN_TAG_SIZE) > rx_buf_size) {...
Same thing for igb.
Konstantin
* Re: [dpdk-dev] [PATCH 3/5] ethdev: add mtu accessors
2014-06-10 17:23 ` Ananyev, Konstantin
@ 2014-06-12 14:20 ` David Marchand
0 siblings, 0 replies; 8+ messages in thread
From: David Marchand @ 2014-06-12 14:20 UTC
To: Ananyev, Konstantin, dev
Hello Konstantin,
Mmm, good points.
I am looking into them.
I will send new patches by tomorrow (hopefully).
On 06/10/2014 07:23 PM, Ananyev, Konstantin wrote:
> Hi David,
>
>> This patch adds two new functions to the ethdev API to retrieve the current MTU and
>> change the MTU of a port.
>> These operations have been implemented for rte_em_pmd, rte_igb_pmd and
>> rte_ixgbe_pmd.
>
>> +
>> +void ixgbe_dev_set_rx_scatter_mode(struct rte_eth_dev *dev)
>> +{
>> + if (dev->rx_pkt_burst != ixgbe_recv_scattered_pkts) {
>> + dev->rx_pkt_burst = ixgbe_recv_scattered_pkts;
>> + /* we must ensure that this is done when we leave the
>> + function */
>> + rte_wmb();
>> + }
>> +}
>
> I don't think it is safe to change RX function on the fly (without calling dev_stop first).
> If before that ixgbe_recv_pkts_bulk_alloc() was used, then up to 32 mbufs could be stored internally in the stage[],
> and ixgbe_recv_scattered_pkts() doesn't have a clue about them.
> Also with ixgbe_recv_pkts_bulk_alloc(), it could be up to rx_free_thresh RX descriptors not armed yet with proper buffer addresses,
> while ixgbe_recv_scattered_pkts() doesn't expect that - it re-arms RXD straight after it receives a packet from it.
>
> I wonder if the ability to change the mtu on the fly (without dev_stop) is really needed?
> If so, then we can probably allow ixgbe_dev_set_mtu() to increase the MTU only if the
> new frame size is less than the RX buffer size OR the device is already using ixgbe_recv_scattered_pkts().
> Something like:
> frame_size <= rx_buf_size || dev->rx_pkt_burst == ixgbe_recv_scattered_pkts
>
>
>> +static int
>> +ixgbe_dev_set_mtu(struct rte_eth_dev *dev, uint16_t *mtu)
>> +{
>> ....
>> +
>> + /* switch to jumbo mode if needed */
>> + if (frame_size > ETHER_MAX_LEN) {
>
> Why ETHER_MAX_LEN?
> If we have rx_buffer big enough, we don't need to split packets even for jumbo frames.
> Shouldn't it be something like what is done inside ixgbe_dev_rx_init():
> if ((mtu + 2 * IXGBE_VLAN_TAG_SIZE) > rx_buf_size) {...
>
> Same thing for igb.
>
--
David Marchand