* [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information
@ 2015-10-22 12:06 Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (8 more replies)
0 siblings, 9 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Add the ability for the upper layer to query:
1) configured RX/TX queue information.
2) information about RX/TX descriptors min/max/align
numbers per queue for the device.
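As an illustration of item 2) above (not part of the patch set): a minimal sketch of how an application could use the new rx_desc_lim limits when choosing a ring size before rte_eth_rx_queue_setup(). The helper name pick_nb_rxd() and the starting value passed in are assumptions made up for the example; the same logic applies to tx_desc_lim.

#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only: clamp a preferred RX ring
 * size to the limits the PMD reports via dev_info.rx_desc_lim. */
static uint16_t
pick_nb_rxd(uint8_t port_id, uint16_t preferred)
{
	struct rte_eth_dev_info dev_info;
	const struct rte_eth_desc_lim *lim;

	rte_eth_dev_info_get(port_id, &dev_info);
	lim = &dev_info.rx_desc_lim;

	/* Clamp to the advertised minimum and maximum. */
	if (preferred > lim->nb_max)
		preferred = lim->nb_max;
	if (preferred < lim->nb_min)
		preferred = lim->nb_min;

	/* Round down to the required alignment (nb_align is at least 1),
	 * assuming nb_min itself satisfies the alignment, as PMDs report. */
	preferred -= preferred % lim->nb_align;
	if (preferred < lim->nb_min)
		preferred = lim->nb_min;

	return preferred;
}

The clamped value can then be passed as nb_rx_desc to rte_eth_rx_queue_setup(); with this series the setup call itself also rejects values outside these limits.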
v2 changes:
- Add formal check for the qinfo input parameter.
- As suggested rename 'rx_qinfo/tx_qinfo' to 'rxq_info/txq_info'
v3 changes:
- Updated rte_ether_version.map
- Merged with latest changes
v4 changes:
- rte_ether_version.map: move new functions into DPDK_2.1 sub-space.
v5 changes:
- addressed previous code-review comments
- rte_ether_version.map: move new functions into DPDK_2.2 sub-space.
- added new fields into rte_eth_dev_info
v6 changes:
- respin to comply with latest dpdk.org
- update release_notes
Konstantin Ananyev (9):
ethdev: add new API to retrieve RX/TX queue information
i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
testpmd: add new command to display RX/TX queue information
fm10k: add HW specific desc_lim data into dev_info
cxgbe: add HW specific desc_lim data into dev_info
vmxnet3: add HW specific desc_lim data into dev_info
doc: release notes update for queue_info_get()
app/test-pmd/cmdline.c | 48 +++++++++++++++++++
app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 2 +
doc/guides/rel_notes/release_2_2.rst | 7 +++
drivers/net/cxgbe/cxgbe_ethdev.c | 9 ++++
drivers/net/e1000/e1000_ethdev.h | 36 ++++++++++++++
drivers/net/e1000/em_ethdev.c | 14 ++++++
drivers/net/e1000/em_rxtx.c | 71 ++++++++++++++++------------
drivers/net/e1000/igb_ethdev.c | 22 +++++++++
drivers/net/e1000/igb_rxtx.c | 66 +++++++++++++++++---------
drivers/net/fm10k/fm10k_ethdev.c | 11 +++++
drivers/net/i40e/i40e_ethdev.c | 14 ++++++
drivers/net/i40e/i40e_ethdev.h | 5 ++
drivers/net/i40e/i40e_ethdev_vf.c | 12 +++++
drivers/net/i40e/i40e_rxtx.c | 37 +++++++++++++++
drivers/net/ixgbe/ixgbe_ethdev.c | 23 +++++++++
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +++
drivers/net/ixgbe/ixgbe_rxtx.c | 68 +++++++++++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 21 +++++++++
drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 +++++
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
23 files changed, 642 insertions(+), 80 deletions(-)
--
1.8.5.3
* [dpdk-dev] [PATCHv6 1/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
` (9 more replies)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim Konstantin Ananyev
` (7 subsequent siblings)
8 siblings, 10 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Add the ability for the upper layer to query RX/TX queue information.
Add into rte_eth_dev_info new fields to represent information about
RX/TX descriptors min/max/align numbers per queue for the device.
Add new structures:
struct rte_eth_rxq_info
struct rte_eth_txq_info
new functions:
rte_eth_rx_queue_info_get
rte_eth_tx_queue_info_get
into the rte_ethdev API.
Left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage.
Add new fields:
rx_desc_lim
tx_desc_lim
into rte_eth_dev_info.
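For illustration only (not part of the patch): a minimal sketch of how an application might call the new rte_eth_rx_queue_info_get() and report a few of the returned fields. The helper name dump_rxq() is made up for the example; a non-zero return covers the -ENODEV/-EINVAL/-ENOTSUP cases handled in the implementation below.

#include <stdio.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only. */
static void
dump_rxq(uint8_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int ret;

	ret = rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo);
	if (ret != 0) {
		/* e.g. -ENOTSUP if the PMD does not implement the callback. */
		printf("rx queue info for port %u queue %u failed: %d\n",
			port_id, queue_id, ret);
		return;
	}

	printf("port %u rxq %u: %u descriptors, mempool %s, scattered RX %s\n",
		port_id, queue_id, qinfo.nb_desc,
		qinfo.mp != NULL ? qinfo.mp->name : "<none>",
		qinfo.scattered_rx ? "on" : "off");
}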
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
3 files changed, 159 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..d18ecb5 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1447,6 +1447,19 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+ nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+ nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+ PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+ "should be: <= %hu, = %hu, and a product of %hu\n",
+ nb_rx_desc,
+ dev_info.rx_desc_lim.nb_max,
+ dev_info.rx_desc_lim.nb_min,
+ dev_info.rx_desc_lim.nb_align);
+ return -EINVAL;
+ }
+
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
@@ -1786,11 +1799,18 @@ void
rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
+ const struct rte_eth_desc_lim lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ };
VALID_PORTID_OR_RET(port_id);
dev = &rte_eth_devices[port_id];
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
+ dev_info->rx_desc_lim = lim;
+ dev_info->tx_desc_lim = lim;
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -3221,6 +3241,54 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
}
int
+rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
+rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
rte_eth_dev_set_mc_addr_list(uint8_t port_id,
struct ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..4d7b6f2 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -653,6 +653,15 @@ struct rte_eth_txconf {
};
/**
+ * A structure containing information about HW descriptor ring limitations.
+ */
+struct rte_eth_desc_lim {
+ uint16_t nb_max; /**< Max allowed number of descriptors. */
+ uint16_t nb_min; /**< Min allowed number of descriptors. */
+ uint16_t nb_align; /**< Number of descriptors should be aligned to. */
+};
+
+/**
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
@@ -837,6 +846,8 @@ struct rte_eth_dev_info {
uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
+ struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
+ struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
};
/** Maximum name length for extended statistics counters */
@@ -854,6 +865,26 @@ struct rte_eth_xstats {
uint64_t value;
};
+/**
+ * Ethernet device RX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_rxq_info {
+ struct rte_mempool *mp; /**< mempool used by that queue. */
+ struct rte_eth_rxconf conf; /**< queue config parameters. */
+ uint8_t scattered_rx; /**< scattered packets RX supported. */
+ uint16_t nb_desc; /**< configured number of RXDs. */
+} __rte_cache_aligned;
+
+/**
+ * Ethernet device TX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_txq_info {
+ struct rte_eth_txconf conf; /**< queue config parameters. */
+ uint16_t nb_desc; /**< configured number of TXDs. */
+} __rte_cache_aligned;
+
struct rte_eth_dev;
struct rte_eth_dev_callback;
@@ -965,6 +996,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
+
+typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+
typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t mtu);
/**< @internal Set MTU. */
@@ -1301,9 +1338,13 @@ struct eth_dev_ops {
rss_hash_update_t rss_hash_update;
/** Get current RSS hash configuration. */
rss_hash_conf_get_t rss_hash_conf_get;
- eth_filter_ctrl_t filter_ctrl; /**< common filter control*/
+ eth_filter_ctrl_t filter_ctrl;
+ /**< common filter control. */
eth_set_mc_addr_list_t set_mc_addr_list; /**< set list of mcast addrs */
-
+ eth_rxq_info_get_t rxq_info_get;
+ /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get;
+ /**< retrieve TX queue information. */
/** Turn IEEE1588/802.1AS timestamping on. */
eth_timesync_enable_t timesync_enable;
/** Turn IEEE1588/802.1AS timestamping off. */
@@ -3441,6 +3482,46 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
struct rte_eth_rxtx_callback *user_cb);
/**
+ * Retrieve information about given port's RX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The RX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_rxq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+/**
+ * Retrieve information about given port's TX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The TX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_txq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+/*
* Retrieve number of available registers for access
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..1fb4b87 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,11 @@ DPDK_2.1 {
rte_eth_timesync_read_tx_timestamp;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_eth_rx_queue_info_get;
+ rte_eth_tx_queue_info_get;
+
+} DPDK_2.1;
--
1.8.5.3
* [dpdk-dev] [PATCHv6 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 3/9] ixgbe: " Konstantin Ananyev
` (6 subsequent siblings)
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
This patch assumes that the patch:
i40e: fix wrong alignment for the number of HW descriptors
already applied.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 14 ++++++++++++++
drivers/net/i40e/i40e_ethdev.h | 5 +++++
drivers/net/i40e/i40e_ethdev_vf.c | 12 ++++++++++++
drivers/net/i40e/i40e_rxtx.c | 37 +++++++++++++++++++++++++++++++++++++
4 files changed, 68 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..cbc1985 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -283,6 +283,8 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
.udp_tunnel_add = i40e_dev_udp_tunnel_add,
.udp_tunnel_del = i40e_dev_udp_tunnel_del,
.filter_ctrl = i40e_dev_filter_ctrl,
+ .rxq_info_get = i40e_rxq_info_get,
+ .txq_info_get = i40e_txq_info_get,
.mirror_rule_set = i40e_mirror_rule_set,
.mirror_rule_reset = i40e_mirror_rule_reset,
.timesync_enable = i40e_timesync_enable,
@@ -1674,6 +1676,18 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
if (pf->flags & I40E_FLAG_VMDQ) {
dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
dev_info->vmdq_queue_base = dev_info->max_rx_queues;
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..4748392 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -502,6 +502,11 @@ int i40e_fdir_ctrl_func(struct rte_eth_dev *dev,
enum rte_filter_op filter_op,
void *arg);
+void i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
/* I40E_DEV_PRIVATE_TO */
#define I40E_DEV_PRIVATE_TO_PF(adapter) \
(&((struct i40e_adapter *)adapter)->pf)
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index b694400..5dad12d 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1756,6 +1756,18 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
}
static void
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 260e580..fa1451e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -3063,3 +3063,40 @@ i40e_fdir_setup_rx_resources(struct i40e_pf *pf)
return I40E_SUCCESS;
}
+
+void
+i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct i40e_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mp;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct i40e_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.txq_flags = txq->txq_flags;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
--
1.8.5.3
* [dpdk-dev] [PATCHv6 3/9] ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 4/9] e1000: " Konstantin Ananyev
` (5 subsequent siblings)
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 23 ++++++++++++++
drivers/net/ixgbe/ixgbe_ethdev.h | 6 ++++
drivers/net/ixgbe/ixgbe_rxtx.c | 68 +++++++++++++++++++++++++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 21 +++++++++++++
4 files changed, 93 insertions(+), 25 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ec2918c..4769bb0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -386,6 +386,18 @@ static const struct rte_pci_id pci_id_ixgbevf_map[] = {
};
+static const struct rte_eth_desc_lim rx_desc_lim = {
+ .nb_max = IXGBE_MAX_RING_DESC,
+ .nb_min = IXGBE_MIN_RING_DESC,
+ .nb_align = IXGBE_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+ .nb_max = IXGBE_MAX_RING_DESC,
+ .nb_min = IXGBE_MIN_RING_DESC,
+ .nb_align = IXGBE_TXD_ALIGN,
+};
+
static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.dev_configure = ixgbe_dev_configure,
.dev_start = ixgbe_dev_start,
@@ -456,6 +468,8 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.rss_hash_conf_get = ixgbe_dev_rss_hash_conf_get,
.filter_ctrl = ixgbe_dev_filter_ctrl,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
+ .rxq_info_get = ixgbe_rxq_info_get,
+ .txq_info_get = ixgbe_txq_info_get,
.timesync_enable = ixgbe_timesync_enable,
.timesync_disable = ixgbe_timesync_disable,
.timesync_read_rx_timestamp = ixgbe_timesync_read_rx_timestamp,
@@ -494,6 +508,8 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
.mac_addr_add = ixgbevf_add_mac_addr,
.mac_addr_remove = ixgbevf_remove_mac_addr,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
+ .rxq_info_get = ixgbe_rxq_info_get,
+ .txq_info_get = ixgbe_txq_info_get,
.mac_addr_set = ixgbevf_set_default_mac_addr,
.get_reg_length = ixgbevf_get_reg_length,
.get_reg = ixgbevf_get_regs,
@@ -2396,6 +2412,10 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
+
dev_info->hash_key_size = IXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
@@ -2449,6 +2469,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c3d4f4f..d16f476 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -351,6 +351,12 @@ int ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
int ixgbevf_dev_rx_init(struct rte_eth_dev *dev);
void ixgbevf_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a598a72..ba08588 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -1821,25 +1821,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary. This will
- * also optimize cache line size effect. H/W supports up to cache line size 128.
- */
-#define IXGBE_ALIGN 128
-
-/*
- * Maximum number of Ring Descriptors.
- *
- * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
- * descriptors should meet the following condition:
- * (num_ring_desc * sizeof(rx/tx descriptor)) % 128 == 0
- */
-#define IXGBE_MIN_RING_DESC 32
-#define IXGBE_MAX_RING_DESC 4096
-
-/*
* Create memzone for HW rings. malloc can't be used as the physical address is
* needed. If the memzone is already created, then this function returns a ptr
* to the old one.
@@ -2007,9 +1988,9 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* It must not exceed hardware maximum, and must be multiple
* of IXGBE_ALIGN.
*/
- if (((nb_desc * sizeof(union ixgbe_adv_tx_desc)) % IXGBE_ALIGN) != 0 ||
- (nb_desc > IXGBE_MAX_RING_DESC) ||
- (nb_desc < IXGBE_MIN_RING_DESC)) {
+ if (nb_desc % IXGBE_TXD_ALIGN != 0 ||
+ (nb_desc > IXGBE_MAX_RING_DESC) ||
+ (nb_desc < IXGBE_MIN_RING_DESC)) {
return -EINVAL;
}
@@ -2374,9 +2355,9 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
* It must not exceed hardware maximum, and must be multiple
* of IXGBE_ALIGN.
*/
- if (((nb_desc * sizeof(union ixgbe_adv_rx_desc)) % IXGBE_ALIGN) != 0 ||
- (nb_desc > IXGBE_MAX_RING_DESC) ||
- (nb_desc < IXGBE_MIN_RING_DESC)) {
+ if (nb_desc % IXGBE_RXD_ALIGN != 0 ||
+ (nb_desc > IXGBE_MAX_RING_DESC) ||
+ (nb_desc < IXGBE_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -4649,6 +4630,43 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
+void
+ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct ixgbe_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct ixgbe_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.txq_flags = txq->txq_flags;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
/*
* [VF] Initializes Receive Unit.
*/
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index b9eca67..475a800 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -34,6 +34,27 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+/*
+ * Rings setup and release.
+ *
+ * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
+ * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary. This will
+ * also optimize cache line size effect. H/W supports up to cache line size 128.
+ */
+#define IXGBE_ALIGN 128
+
+#define IXGBE_RXD_ALIGN (IXGBE_ALIGN / sizeof(union ixgbe_adv_rx_desc))
+#define IXGBE_TXD_ALIGN (IXGBE_ALIGN / sizeof(union ixgbe_adv_tx_desc))
+
+/*
+ * Maximum number of Ring Descriptors.
+ *
+ * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
+ * descriptors should meet the following condition:
+ * (num_ring_desc * sizeof(rx/tx descriptor)) % 128 == 0
+ */
+#define IXGBE_MIN_RING_DESC 32
+#define IXGBE_MAX_RING_DESC 4096
#define RTE_PMD_IXGBE_TX_MAX_BURST 32
#define RTE_PMD_IXGBE_RX_MAX_BURST 32
--
1.8.5.3
* [dpdk-dev] [PATCHv6 4/9] e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (2 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 3/9] ixgbe: " Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 5/9] fm10k: add HW specific desc_lim data into dev_info Konstantin Ananyev
` (4 subsequent siblings)
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/e1000/e1000_ethdev.h | 36 ++++++++++++++++++++
drivers/net/e1000/em_ethdev.c | 14 ++++++++
drivers/net/e1000/em_rxtx.c | 71 +++++++++++++++++++++++-----------------
drivers/net/e1000/igb_ethdev.c | 22 +++++++++++++
drivers/net/e1000/igb_rxtx.c | 66 ++++++++++++++++++++++++-------------
5 files changed, 156 insertions(+), 53 deletions(-)
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 4e69e44..3c6f613 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -108,6 +108,30 @@
ETH_RSS_IPV6_TCP_EX | \
ETH_RSS_IPV6_UDP_EX)
+/*
+ * Maximum number of Ring Descriptors.
+ *
+ * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
+ * descriptors should meet the following condition:
+ * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
+ */
+#define E1000_MIN_RING_DESC 32
+#define E1000_MAX_RING_DESC 4096
+
+/*
+ * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
+ * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
+ * This will also optimize cache line size effect.
+ * H/W supports up to cache line size 128.
+ */
+#define E1000_ALIGN 128
+
+#define IGB_RXD_ALIGN (E1000_ALIGN / sizeof(union e1000_adv_rx_desc))
+#define IGB_TXD_ALIGN (E1000_ALIGN / sizeof(union e1000_adv_tx_desc))
+
+#define EM_RXD_ALIGN (E1000_ALIGN / sizeof(struct e1000_rx_desc))
+#define EM_TXD_ALIGN (E1000_ALIGN / sizeof(struct e1000_data_desc))
+
/* structure for interrupt relative data */
struct e1000_interrupt {
uint32_t flags;
@@ -307,6 +331,12 @@ void igb_pf_mbx_process(struct rte_eth_dev *eth_dev);
int igb_pf_host_configure(struct rte_eth_dev *eth_dev);
+void igb_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void igb_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
/*
* RX/TX EM function prototypes
*/
@@ -343,6 +373,12 @@ uint16_t eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+void em_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void em_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
void igb_pf_host_uninit(struct rte_eth_dev *dev);
#endif /* _E1000_ETHDEV_H_ */
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 912f5dd..0cbc228 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -166,6 +166,8 @@ static const struct eth_dev_ops eth_em_ops = {
.mac_addr_add = eth_em_rar_set,
.mac_addr_remove = eth_em_rar_clear,
.set_mc_addr_list = eth_em_set_mc_addr_list,
+ .rxq_info_get = em_rxq_info_get,
+ .txq_info_get = em_txq_info_get,
};
/**
@@ -933,6 +935,18 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = EM_RXD_ALIGN,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = EM_TXD_ALIGN,
+ };
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 3b8776d..03e1bc2 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1081,26 +1081,6 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return (nb_rx);
}
-/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
- * This will also optimize cache line size effect.
- * H/W supports up to cache line size 128.
- */
-#define EM_ALIGN 128
-
-/*
- * Maximum number of Ring Descriptors.
- *
- * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
- * desscriptors should meet the following condition:
- * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
- */
-#define EM_MIN_RING_DESC 32
-#define EM_MAX_RING_DESC 4096
-
#define EM_MAX_BUF_SIZE 16384
#define EM_RCTL_FLXBUF_STEP 1024
@@ -1210,11 +1190,11 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of transmit descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of EM_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(*txq->tx_ring)) % EM_ALIGN) != 0 ||
- (nb_desc > EM_MAX_RING_DESC) ||
- (nb_desc < EM_MIN_RING_DESC)) {
+ if (nb_desc % EM_TXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return -(EINVAL);
}
@@ -1272,7 +1252,7 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tsize = sizeof (txq->tx_ring[0]) * EM_MAX_RING_DESC;
+ tsize = sizeof(txq->tx_ring[0]) * E1000_MAX_RING_DESC;
if ((tz = ring_dma_zone_reserve(dev, "tx_ring", queue_idx, tsize,
socket_id)) == NULL)
return (-ENOMEM);
@@ -1375,11 +1355,11 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of receive descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of EM_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(rxq->rx_ring[0])) % EM_ALIGN) != 0 ||
- (nb_desc > EM_MAX_RING_DESC) ||
- (nb_desc < EM_MIN_RING_DESC)) {
+ if (nb_desc % EM_RXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -1399,7 +1379,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
}
/* Allocate RX ring for max possible mumber of hardware descriptors. */
- rsize = sizeof (rxq->rx_ring[0]) * EM_MAX_RING_DESC;
+ rsize = sizeof(rxq->rx_ring[0]) * E1000_MAX_RING_DESC;
if ((rz = ring_dma_zone_reserve(dev, "rx_ring", queue_idx, rsize,
socket_id)) == NULL)
return (-ENOMEM);
@@ -1881,3 +1861,34 @@ eth_em_tx_init(struct rte_eth_dev *dev)
/* This write will effectively turn on the transmit unit. */
E1000_WRITE_REG(hw, E1000_TCTL, tctl);
}
+
+void
+em_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct em_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+}
+
+void
+em_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct em_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 848ef6e..73c067e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -281,6 +281,18 @@ static const struct rte_pci_id pci_id_igbvf_map[] = {
{0},
};
+static const struct rte_eth_desc_lim rx_desc_lim = {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = IGB_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = IGB_TXD_ALIGN,
+};
+
static const struct eth_dev_ops eth_igb_ops = {
.dev_configure = eth_igb_configure,
.dev_start = eth_igb_start,
@@ -319,6 +331,8 @@ static const struct eth_dev_ops eth_igb_ops = {
.rss_hash_conf_get = eth_igb_rss_hash_conf_get,
.filter_ctrl = eth_igb_filter_ctrl,
.set_mc_addr_list = eth_igb_set_mc_addr_list,
+ .rxq_info_get = igb_rxq_info_get,
+ .txq_info_get = igb_txq_info_get,
.timesync_enable = igb_timesync_enable,
.timesync_disable = igb_timesync_disable,
.timesync_read_rx_timestamp = igb_timesync_read_rx_timestamp,
@@ -349,6 +363,8 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
.tx_queue_setup = eth_igb_tx_queue_setup,
.tx_queue_release = eth_igb_tx_queue_release,
.set_mc_addr_list = eth_igb_set_mc_addr_list,
+ .rxq_info_get = igb_rxq_info_get,
+ .txq_info_get = igb_txq_info_get,
.mac_addr_set = igbvf_default_mac_addr_set,
.get_reg_length = igbvf_get_reg_length,
.get_reg = igbvf_get_regs,
@@ -1570,6 +1586,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
},
.txq_flags = 0,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
static void
@@ -1621,6 +1640,9 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
},
.txq_flags = 0,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 19905fd..cca3300 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1148,25 +1148,12 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
- * This will also optimize cache line size effect.
- * H/W supports up to cache line size 128.
- */
-#define IGB_ALIGN 128
-
-/*
* Maximum number of Ring Descriptors.
*
* Since RDLEN/TDLEN should be multiple of 128bytes, the number of ring
* desscriptors should meet the following condition:
* (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
*/
-#define IGB_MIN_RING_DESC 32
-#define IGB_MAX_RING_DESC 4096
-
static const struct rte_memzone *
ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
uint16_t queue_id, uint32_t ring_size, int socket_id)
@@ -1183,10 +1170,10 @@ ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
#ifdef RTE_LIBRTE_XEN_DOM0
return rte_memzone_reserve_bounded(z_name, ring_size,
- socket_id, 0, IGB_ALIGN, RTE_PGSIZE_2M);
+ socket_id, 0, E1000_ALIGN, RTE_PGSIZE_2M);
#else
return rte_memzone_reserve_aligned(z_name, ring_size,
- socket_id, 0, IGB_ALIGN);
+ socket_id, 0, E1000_ALIGN);
#endif
}
@@ -1282,10 +1269,11 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of transmit descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of IGB_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(union e1000_adv_tx_desc)) % IGB_ALIGN) != 0 ||
- (nb_desc > IGB_MAX_RING_DESC) || (nb_desc < IGB_MIN_RING_DESC)) {
+ if (nb_desc % IGB_TXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return -EINVAL;
}
@@ -1321,7 +1309,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- size = sizeof(union e1000_adv_tx_desc) * IGB_MAX_RING_DESC;
+ size = sizeof(union e1000_adv_tx_desc) * E1000_MAX_RING_DESC;
tz = ring_dma_zone_reserve(dev, "tx_ring", queue_idx,
size, socket_id);
if (tz == NULL) {
@@ -1430,10 +1418,11 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of receive descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of IGB_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(union e1000_adv_rx_desc)) % IGB_ALIGN) != 0 ||
- (nb_desc > IGB_MAX_RING_DESC) || (nb_desc < IGB_MIN_RING_DESC)) {
+ if (nb_desc % IGB_RXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -1469,7 +1458,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- size = sizeof(union e1000_adv_rx_desc) * IGB_MAX_RING_DESC;
+ size = sizeof(union e1000_adv_rx_desc) * E1000_MAX_RING_DESC;
rz = ring_dma_zone_reserve(dev, "rx_ring", queue_idx, size, socket_id);
if (rz == NULL) {
igb_rx_queue_release(rxq);
@@ -2482,3 +2471,34 @@ eth_igbvf_tx_init(struct rte_eth_dev *dev)
}
}
+
+void
+igb_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct igb_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+}
+
+void
+igb_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct igb_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+}
--
1.8.5.3
* [dpdk-dev] [PATCHv6 5/9] fm10k: add HW specific desc_lim data into dev_info
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (3 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 4/9] e1000: " Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 6/9] cxgbe: " Konstantin Ananyev
` (3 subsequent siblings)
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/fm10k/fm10k_ethdev.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index a69c990..9588dab 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -964,6 +964,17 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = FM10K_MAX_RX_DESC,
+ .nb_min = FM10K_MIN_RX_DESC,
+ .nb_align = FM10K_MULT_RX_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = FM10K_MAX_TX_DESC,
+ .nb_min = FM10K_MIN_TX_DESC,
+ .nb_align = FM10K_MULT_TX_DESC,
+ };
}
static int
--
1.8.5.3
* [dpdk-dev] [PATCHv6 6/9] cxgbe: add HW specific desc_lim data into dev_info
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (4 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 5/9] fm10k: add HW specific desc_lim data into dev_info Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 7/9] vmxnet3: " Konstantin Ananyev
` (2 subsequent siblings)
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index a8e057b..920e071 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -141,6 +141,12 @@ static void cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
struct adapter *adapter = pi->adapter;
int max_queues = adapter->sge.max_ethqsets / adapter->params.nports;
+ static const struct rte_eth_desc_lim cxgbe_desc_lim = {
+ .nb_max = CXGBE_MAX_RING_DESC_SIZE,
+ .nb_min = CXGBE_MIN_RING_DESC_SIZE,
+ .nb_align = 1,
+ };
+
device_info->min_rx_bufsize = CXGBE_MIN_RX_BUFSIZE;
device_info->max_rx_pktlen = CXGBE_MAX_RX_PKTLEN;
device_info->max_rx_queues = max_queues;
@@ -162,6 +168,9 @@ static void cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_TCP_TSO;
device_info->reta_size = pi->rss_size;
+
+ device_info->rx_desc_lim = cxgbe_desc_lim;
+ device_info->tx_desc_lim = cxgbe_desc_lim;
}
static void cxgbe_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
--
1.8.5.3
* [dpdk-dev] [PATCHv6 7/9] vmxnet3: add HW specific desc_lim data into dev_info
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (5 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 6/9] cxgbe: " Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 8/9] testpmd: add new command to display RX/TX queue information Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get() Konstantin Ananyev
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a70be5c..3745b7d 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -677,6 +677,18 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
dev_info->default_txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = VMXNET3_RX_RING_MAX_SIZE,
+ .nb_min = VMXNET3_DEF_RX_RING_SIZE,
+ .nb_align = 1,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = VMXNET3_TX_RING_MAX_SIZE,
+ .nb_min = VMXNET3_DEF_TX_RING_SIZE,
+ .nb_align = 1,
+ };
}
/* return 0 means link status changed, -1 means not changed */
--
1.8.5.3
* [dpdk-dev] [PATCHv6 8/9] testpmd: add new command to display RX/TX queue information
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (6 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 7/9] vmxnet3: " Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get() Konstantin Ananyev
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test-pmd/cmdline.c | 48 +++++++++++++++++++++++++++++++
app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 2 ++
3 files changed, 127 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0f8f48f..ea2b8a8 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5305,6 +5305,53 @@ cmdline_parse_inst_t cmd_showport = {
},
};
+/* *** SHOW QUEUE INFO *** */
+struct cmd_showqueue_result {
+ cmdline_fixed_string_t show;
+ cmdline_fixed_string_t type;
+ cmdline_fixed_string_t what;
+ uint8_t portnum;
+ uint16_t queuenum;
+};
+
+static void
+cmd_showqueue_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_showqueue_result *res = parsed_result;
+
+ if (!strcmp(res->type, "rxq"))
+ rx_queue_infos_display(res->portnum, res->queuenum);
+ else if (!strcmp(res->type, "txq"))
+ tx_queue_infos_display(res->portnum, res->queuenum);
+}
+
+cmdline_parse_token_string_t cmd_showqueue_show =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, show, "show");
+cmdline_parse_token_string_t cmd_showqueue_type =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, type, "rxq#txq");
+cmdline_parse_token_string_t cmd_showqueue_what =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, what, "info");
+cmdline_parse_token_num_t cmd_showqueue_portnum =
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, portnum, UINT8);
+cmdline_parse_token_num_t cmd_showqueue_queuenum =
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, queuenum, UINT16);
+
+cmdline_parse_inst_t cmd_showqueue = {
+ .f = cmd_showqueue_parsed,
+ .data = NULL,
+ .help_str = "show rxq|txq info <port number> <queue_number>",
+ .tokens = {
+ (void *)&cmd_showqueue_show,
+ (void *)&cmd_showqueue_type,
+ (void *)&cmd_showqueue_what,
+ (void *)&cmd_showqueue_portnum,
+ (void *)&cmd_showqueue_queuenum,
+ NULL,
+ },
+};
+
/* *** READ PORT REGISTER *** */
struct cmd_read_reg_result {
cmdline_fixed_string_t read;
@@ -8910,6 +8957,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_help_long,
(cmdline_parse_inst_t *)&cmd_quit,
(cmdline_parse_inst_t *)&cmd_showport,
+ (cmdline_parse_inst_t *)&cmd_showqueue,
(cmdline_parse_inst_t *)&cmd_showportall,
(cmdline_parse_inst_t *)&cmd_showcfg,
(cmdline_parse_inst_t *)&cmd_start,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cf2aa6e..aad2ab6 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -293,6 +293,69 @@ nic_stats_mapping_display(portid_t port_id)
}
void
+rx_queue_infos_display(portid_t port_id, uint16_t queue_id)
+{
+ struct rte_eth_rxq_info qinfo;
+ int32_t rc;
+ static const char *info_border = "*********************";
+
+ rc = rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo);
+ if (rc != 0) {
+ printf("Failed to retrieve information for port: %hhu, "
+ "RX queue: %hu\nerror desc: %s(%d)\n",
+ port_id, queue_id, strerror(-rc), rc);
+ return;
+ }
+
+ printf("\n%s Infos for port %-2u, RX queue %-2u %s",
+ info_border, port_id, queue_id, info_border);
+
+ printf("\nMempool: %s", (qinfo.mp == NULL) ? "NULL" : qinfo.mp->name);
+ printf("\nRX prefetch threshold: %hhu", qinfo.conf.rx_thresh.pthresh);
+ printf("\nRX host threshold: %hhu", qinfo.conf.rx_thresh.hthresh);
+ printf("\nRX writeback threshold: %hhu", qinfo.conf.rx_thresh.wthresh);
+ printf("\nRX free threshold: %hu", qinfo.conf.rx_free_thresh);
+ printf("\nRX drop packets: %s",
+ (qinfo.conf.rx_drop_en != 0) ? "on" : "off");
+ printf("\nRX deferred start: %s",
+ (qinfo.conf.rx_deferred_start != 0) ? "on" : "off");
+ printf("\nRX scattered packets: %s",
+ (qinfo.scattered_rx != 0) ? "on" : "off");
+ printf("\nNumber of RXDs: %hu", qinfo.nb_desc);
+ printf("\n");
+}
+
+void
+tx_queue_infos_display(portid_t port_id, uint16_t queue_id)
+{
+ struct rte_eth_txq_info qinfo;
+ int32_t rc;
+ static const char *info_border = "*********************";
+
+ rc = rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo);
+ if (rc != 0) {
+ printf("Failed to retrieve information for port: %hhu, "
+ "TX queue: %hu\nerror desc: %s(%d)\n",
+ port_id, queue_id, strerror(-rc), rc);
+ return;
+ }
+
+ printf("\n%s Infos for port %-2u, TX queue %-2u %s",
+ info_border, port_id, queue_id, info_border);
+
+ printf("\nTX prefetch threshold: %hhu", qinfo.conf.tx_thresh.pthresh);
+ printf("\nTX host threshold: %hhu", qinfo.conf.tx_thresh.hthresh);
+ printf("\nTX writeback threshold: %hhu", qinfo.conf.tx_thresh.wthresh);
+ printf("\nTX RS threshold: %hu", qinfo.conf.tx_rs_thresh);
+ printf("\nTX free threshold: %hu", qinfo.conf.tx_free_thresh);
+ printf("\nTX flags: %#x", qinfo.conf.txq_flags);
+ printf("\nTX deferred start: %s",
+ (qinfo.conf.tx_deferred_start != 0) ? "on" : "off");
+ printf("\nNumber of TXDs: %hu", qinfo.nb_desc);
+ printf("\n");
+}
+
+void
port_infos_display(portid_t port_id)
{
struct rte_port *port;
@@ -380,6 +443,20 @@ port_infos_display(portid_t port_id)
printf(" %s\n", (p ? p : "unknown"));
}
}
+
+ printf("Max possible RX queues: %u\n", dev_info.max_rx_queues);
+ printf("Max possible number of RXDs per queue: %hu\n",
+ dev_info.rx_desc_lim.nb_max);
+ printf("Min possible number of RXDs per queue: %hu\n",
+ dev_info.rx_desc_lim.nb_min);
+ printf("RXDs number alignment: %hu\n", dev_info.rx_desc_lim.nb_align);
+
+ printf("Max possible TX queues: %u\n", dev_info.max_tx_queues);
+ printf("Max possible number of TXDs per queue: %hu\n",
+ dev_info.tx_desc_lim.nb_max);
+ printf("Min possible number of TXDs per queue: %hu\n",
+ dev_info.tx_desc_lim.nb_min);
+ printf("TXDs number alignment: %hu\n", dev_info.tx_desc_lim.nb_align);
}
int
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d287274..2551704 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -480,6 +480,8 @@ void nic_xstats_display(portid_t port_id);
void nic_xstats_clear(portid_t port_id);
void nic_stats_mapping_display(portid_t port_id);
void port_infos_display(portid_t port_id);
+void rx_queue_infos_display(portid_t port_id, uint16_t queue_id);
+void tx_queue_infos_display(portid_t port_id, uint16_t queue_id);
void fwd_lcores_config_display(void);
void fwd_config_display(void);
void rxtx_config_display(void);
--
1.8.5.3
* [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get()
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
` (7 preceding siblings ...)
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 8/9] testpmd: add new command to display RX/TX queue information Konstantin Ananyev
@ 2015-10-22 12:06 ` Konstantin Ananyev
8 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_2_2.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 4f75cff..33ea399 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -9,6 +9,11 @@ New Features
* Added support for Jumbo Frames.
* Optimize forwarding performance for Chelsio T5 40GbE cards.
+* **Add new API into rte_ethdev to retrieve RX/TX queue information.**
+
+ * Add the ability for the upper layer to query RX/TX queue information.
+ * Add into rte_eth_dev_info new fields to represent information about
+ RX/TX descriptors min/max/align numbers per queue for the device.
Resolved Issues
---------------
@@ -94,6 +99,8 @@ API Changes
* The deprecated ring PMD functions are removed:
rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
+* New functions rte_eth_rx_queue_info_get() and rte_eth_tx_queue_info_get()
+ are introduced.
ABI Changes
-----------
--
1.8.5.3
* [dpdk-dev] [PATCHv7 0/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-28 9:55 ` Remy Horton
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 1/9] " Konstantin Ananyev
` (8 subsequent siblings)
9 siblings, 1 reply; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Add the ability for the upper layer to query:
1) configured RX/TX queue information.
2) information about RX/TX descriptors min/max/align
numbers per queue for the device.
v2 changes:
- Add formal check for the qinfo input parameter.
- As suggested rename 'rx_qinfo/tx_qinfo' to 'rxq_info/txq_info'
v3 changes:
- Updated rte_ether_version.map
- Merged with latest changes
v4 changes:
- rte_ether_version.map: move new functions into DPDK_2.1 sub-space.
v5 changes:
- addressed previous code-review comments
- rte_ether_version.map: move new functions into DPDK_2.2 sub-space.
- added new fields into rte_eth_dev_info
v6 changes:
- respin to comply with latest dpdk.org
- update release_notes, section "New Features"
v7 changes:
- update release notes, sections: "API Changes", "ABI Changes"
Konstantin Ananyev (9):
ethdev: add new API to retrieve RX/TX queue information
i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
fm10k: add HW specific desc_lim data into dev_info
cxgbe: add HW specific desc_lim data into dev_info
vmxnet3: add HW specific desc_lim data into dev_info
testpmd: add new command to display RX/TX queue information
doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
app/test-pmd/cmdline.c | 48 +++++++++++++++++++
app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 2 +
doc/guides/rel_notes/release_2_2.rst | 13 ++++++
drivers/net/cxgbe/cxgbe_ethdev.c | 9 ++++
drivers/net/e1000/e1000_ethdev.h | 36 ++++++++++++++
drivers/net/e1000/em_ethdev.c | 14 ++++++
drivers/net/e1000/em_rxtx.c | 71 ++++++++++++++++------------
drivers/net/e1000/igb_ethdev.c | 22 +++++++++
drivers/net/e1000/igb_rxtx.c | 66 +++++++++++++++++---------
drivers/net/fm10k/fm10k_ethdev.c | 11 +++++
drivers/net/i40e/i40e_ethdev.c | 14 ++++++
drivers/net/i40e/i40e_ethdev.h | 5 ++
drivers/net/i40e/i40e_ethdev_vf.c | 12 +++++
drivers/net/i40e/i40e_rxtx.c | 37 +++++++++++++++
drivers/net/ixgbe/ixgbe_ethdev.c | 23 +++++++++
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +++
drivers/net/ixgbe/ixgbe_rxtx.c | 68 +++++++++++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 21 +++++++++
drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 +++++
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
23 files changed, 648 insertions(+), 80 deletions(-)
--
1.8.5.3
* [dpdk-dev] [PATCHv7 1/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim Konstantin Ananyev
` (7 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Add the ability for the upper layer to query RX/TX queue information.
Add into rte_eth_dev_info new fields to represent information about
RX/TX descriptors min/max/align numbers per queue for the device.
Add new structures:
struct rte_eth_rxq_info
struct rte_eth_txq_info
new functions:
rte_eth_rx_queue_info_get
rte_eth_tx_queue_info_get
into the rte_ethdev API.
Left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage.
Add new fields:
rx_desc_lim
tx_desc_lim
into rte_eth_dev_info.
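To complement the RX-side sketch earlier in the thread, an illustration-only example (not part of the patch) of querying a TX queue through the new rte_eth_tx_queue_info_get(); the helper name dump_txq() is made up for the example.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only. */
static void
dump_txq(uint8_t port_id, uint16_t queue_id)
{
	struct rte_eth_txq_info qinfo;
	int ret;

	ret = rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo);
	if (ret != 0) {
		printf("tx queue info for port %u queue %u failed: %d\n",
			port_id, queue_id, ret);
		return;
	}

	printf("port %u txq %u: %u descriptors, tx_free_thresh %u, "
		"tx_rs_thresh %u, txq_flags %#x\n",
		port_id, queue_id, qinfo.nb_desc,
		qinfo.conf.tx_free_thresh, qinfo.conf.tx_rs_thresh,
		qinfo.conf.txq_flags);
}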
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
3 files changed, 159 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..d18ecb5 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1447,6 +1447,19 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+ nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+ nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+ PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+ "should be: <= %hu, = %hu, and a product of %hu\n",
+ nb_rx_desc,
+ dev_info.rx_desc_lim.nb_max,
+ dev_info.rx_desc_lim.nb_min,
+ dev_info.rx_desc_lim.nb_align);
+ return -EINVAL;
+ }
+
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
@@ -1786,11 +1799,18 @@ void
rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
+ const struct rte_eth_desc_lim lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ };
VALID_PORTID_OR_RET(port_id);
dev = &rte_eth_devices[port_id];
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
+ dev_info->rx_desc_lim = lim;
+ dev_info->tx_desc_lim = lim;
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -3221,6 +3241,54 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
}
int
+rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
+rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
rte_eth_dev_set_mc_addr_list(uint8_t port_id,
struct ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..4d7b6f2 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -653,6 +653,15 @@ struct rte_eth_txconf {
};
/**
+ * A structure that contains information about HW descriptor ring limitations.
+ */
+struct rte_eth_desc_lim {
+ uint16_t nb_max; /**< Max allowed number of descriptors. */
+ uint16_t nb_min; /**< Min allowed number of descriptors. */
+ uint16_t nb_align; /**< Number of descriptors should be aligned to. */
+};
+
+/**
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
@@ -837,6 +846,8 @@ struct rte_eth_dev_info {
uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
+ struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
+ struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
};
/** Maximum name length for extended statistics counters */
@@ -854,6 +865,26 @@ struct rte_eth_xstats {
uint64_t value;
};
+/**
+ * Ethernet device RX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_rxq_info {
+ struct rte_mempool *mp; /**< mempool used by that queue. */
+ struct rte_eth_rxconf conf; /**< queue config parameters. */
+ uint8_t scattered_rx; /**< scattered packets RX supported. */
+ uint16_t nb_desc; /**< configured number of RXDs. */
+} __rte_cache_aligned;
+
+/**
+ * Ethernet device TX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_txq_info {
+ struct rte_eth_txconf conf; /**< queue config parameters. */
+ uint16_t nb_desc; /**< configured number of TXDs. */
+} __rte_cache_aligned;
+
struct rte_eth_dev;
struct rte_eth_dev_callback;
@@ -965,6 +996,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
+
+typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+
typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t mtu);
/**< @internal Set MTU. */
@@ -1301,9 +1338,13 @@ struct eth_dev_ops {
rss_hash_update_t rss_hash_update;
/** Get current RSS hash configuration. */
rss_hash_conf_get_t rss_hash_conf_get;
- eth_filter_ctrl_t filter_ctrl; /**< common filter control*/
+ eth_filter_ctrl_t filter_ctrl;
+ /**< common filter control. */
eth_set_mc_addr_list_t set_mc_addr_list; /**< set list of mcast addrs */
-
+ eth_rxq_info_get_t rxq_info_get;
+ /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get;
+ /**< retrieve TX queue information. */
/** Turn IEEE1588/802.1AS timestamping on. */
eth_timesync_enable_t timesync_enable;
/** Turn IEEE1588/802.1AS timestamping off. */
@@ -3441,6 +3482,46 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
struct rte_eth_rxtx_callback *user_cb);
/**
+ * Retrieve information about given port's RX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The RX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_rxq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+/**
+ * Retrieve information about given port's TX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The TX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_txq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+/**
* Retrieve number of available registers for access
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..1fb4b87 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,11 @@ DPDK_2.1 {
rte_eth_timesync_read_tx_timestamp;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_eth_rx_queue_info_get;
+ rte_eth_tx_queue_info_get;
+
+} DPDK_2.1;
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 1/9] " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 3/9] ixgbe: " Konstantin Ananyev
` (6 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
This patch assumes that the patch:
i40e: fix wrong alignment for the number of HW descriptors
is already applied.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 14 ++++++++++++++
drivers/net/i40e/i40e_ethdev.h | 5 +++++
drivers/net/i40e/i40e_ethdev_vf.c | 12 ++++++++++++
drivers/net/i40e/i40e_rxtx.c | 37 +++++++++++++++++++++++++++++++++++++
4 files changed, 68 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..cbc1985 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -283,6 +283,8 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
.udp_tunnel_add = i40e_dev_udp_tunnel_add,
.udp_tunnel_del = i40e_dev_udp_tunnel_del,
.filter_ctrl = i40e_dev_filter_ctrl,
+ .rxq_info_get = i40e_rxq_info_get,
+ .txq_info_get = i40e_txq_info_get,
.mirror_rule_set = i40e_mirror_rule_set,
.mirror_rule_reset = i40e_mirror_rule_reset,
.timesync_enable = i40e_timesync_enable,
@@ -1674,6 +1676,18 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
if (pf->flags & I40E_FLAG_VMDQ) {
dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
dev_info->vmdq_queue_base = dev_info->max_rx_queues;
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..4748392 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -502,6 +502,11 @@ int i40e_fdir_ctrl_func(struct rte_eth_dev *dev,
enum rte_filter_op filter_op,
void *arg);
+void i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
/* I40E_DEV_PRIVATE_TO */
#define I40E_DEV_PRIVATE_TO_PF(adapter) \
(&((struct i40e_adapter *)adapter)->pf)
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index b694400..5dad12d 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1756,6 +1756,18 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = I40E_MAX_RING_DESC,
+ .nb_min = I40E_MIN_RING_DESC,
+ .nb_align = I40E_ALIGN_RING_DESC,
+ };
}
static void
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 260e580..fa1451e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -3063,3 +3063,40 @@ i40e_fdir_setup_rx_resources(struct i40e_pf *pf)
return I40E_SUCCESS;
}
+
+void
+i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct i40e_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mp;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct i40e_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.txq_flags = txq->txq_flags;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 3/9] ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (2 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 2/9] i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 4/9] e1000: " Konstantin Ananyev
` (5 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 23 ++++++++++++++
drivers/net/ixgbe/ixgbe_ethdev.h | 6 ++++
drivers/net/ixgbe/ixgbe_rxtx.c | 68 +++++++++++++++++++++++++---------------
drivers/net/ixgbe/ixgbe_rxtx.h | 21 +++++++++++++
4 files changed, 93 insertions(+), 25 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ec2918c..4769bb0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -386,6 +386,18 @@ static const struct rte_pci_id pci_id_ixgbevf_map[] = {
};
+static const struct rte_eth_desc_lim rx_desc_lim = {
+ .nb_max = IXGBE_MAX_RING_DESC,
+ .nb_min = IXGBE_MIN_RING_DESC,
+ .nb_align = IXGBE_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+ .nb_max = IXGBE_MAX_RING_DESC,
+ .nb_min = IXGBE_MIN_RING_DESC,
+ .nb_align = IXGBE_TXD_ALIGN,
+};
+
static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.dev_configure = ixgbe_dev_configure,
.dev_start = ixgbe_dev_start,
@@ -456,6 +468,8 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.rss_hash_conf_get = ixgbe_dev_rss_hash_conf_get,
.filter_ctrl = ixgbe_dev_filter_ctrl,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
+ .rxq_info_get = ixgbe_rxq_info_get,
+ .txq_info_get = ixgbe_txq_info_get,
.timesync_enable = ixgbe_timesync_enable,
.timesync_disable = ixgbe_timesync_disable,
.timesync_read_rx_timestamp = ixgbe_timesync_read_rx_timestamp,
@@ -494,6 +508,8 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
.mac_addr_add = ixgbevf_add_mac_addr,
.mac_addr_remove = ixgbevf_remove_mac_addr,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
+ .rxq_info_get = ixgbe_rxq_info_get,
+ .txq_info_get = ixgbe_txq_info_get,
.mac_addr_set = ixgbevf_set_default_mac_addr,
.get_reg_length = ixgbevf_get_reg_length,
.get_reg = ixgbevf_get_regs,
@@ -2396,6 +2412,10 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
+
dev_info->hash_key_size = IXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
@@ -2449,6 +2469,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c3d4f4f..d16f476 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -351,6 +351,12 @@ int ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
int ixgbevf_dev_rx_init(struct rte_eth_dev *dev);
void ixgbevf_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a598a72..ba08588 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -1821,25 +1821,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
**********************************************************************/
/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary. This will
- * also optimize cache line size effect. H/W supports up to cache line size 128.
- */
-#define IXGBE_ALIGN 128
-
-/*
- * Maximum number of Ring Descriptors.
- *
- * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
- * descriptors should meet the following condition:
- * (num_ring_desc * sizeof(rx/tx descriptor)) % 128 == 0
- */
-#define IXGBE_MIN_RING_DESC 32
-#define IXGBE_MAX_RING_DESC 4096
-
-/*
* Create memzone for HW rings. malloc can't be used as the physical address is
* needed. If the memzone is already created, then this function returns a ptr
* to the old one.
@@ -2007,9 +1988,9 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
* It must not exceed hardware maximum, and must be multiple
* of IXGBE_ALIGN.
*/
- if (((nb_desc * sizeof(union ixgbe_adv_tx_desc)) % IXGBE_ALIGN) != 0 ||
- (nb_desc > IXGBE_MAX_RING_DESC) ||
- (nb_desc < IXGBE_MIN_RING_DESC)) {
+ if (nb_desc % IXGBE_TXD_ALIGN != 0 ||
+ (nb_desc > IXGBE_MAX_RING_DESC) ||
+ (nb_desc < IXGBE_MIN_RING_DESC)) {
return -EINVAL;
}
@@ -2374,9 +2355,9 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
* It must not exceed hardware maximum, and must be multiple
* of IXGBE_ALIGN.
*/
- if (((nb_desc * sizeof(union ixgbe_adv_rx_desc)) % IXGBE_ALIGN) != 0 ||
- (nb_desc > IXGBE_MAX_RING_DESC) ||
- (nb_desc < IXGBE_MIN_RING_DESC)) {
+ if (nb_desc % IXGBE_RXD_ALIGN != 0 ||
+ (nb_desc > IXGBE_MAX_RING_DESC) ||
+ (nb_desc < IXGBE_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -4649,6 +4630,43 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
+void
+ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct ixgbe_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct ixgbe_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.txq_flags = txq->txq_flags;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
/*
* [VF] Initializes Receive Unit.
*/
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index b9eca67..475a800 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -34,6 +34,27 @@
#ifndef _IXGBE_RXTX_H_
#define _IXGBE_RXTX_H_
+/*
+ * Rings setup and release.
+ *
+ * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
+ * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary. This will
+ * also optimize cache line size effect. H/W supports up to cache line size 128.
+ */
+#define IXGBE_ALIGN 128
+
+#define IXGBE_RXD_ALIGN (IXGBE_ALIGN / sizeof(union ixgbe_adv_rx_desc))
+#define IXGBE_TXD_ALIGN (IXGBE_ALIGN / sizeof(union ixgbe_adv_tx_desc))
+
+/*
+ * Maximum number of Ring Descriptors.
+ *
+ * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
+ * descriptors should meet the following condition:
+ * (num_ring_desc * sizeof(rx/tx descriptor)) % 128 == 0
+ */
+#define IXGBE_MIN_RING_DESC 32
+#define IXGBE_MAX_RING_DESC 4096
#define RTE_PMD_IXGBE_TX_MAX_BURST 32
#define RTE_PMD_IXGBE_RX_MAX_BURST 32
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 4/9] e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (3 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 3/9] ixgbe: " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 5/9] fm10k: add HW specific desc_lim data into dev_info Konstantin Ananyev
` (4 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/e1000/e1000_ethdev.h | 36 ++++++++++++++++++++
drivers/net/e1000/em_ethdev.c | 14 ++++++++
drivers/net/e1000/em_rxtx.c | 71 +++++++++++++++++++++++-----------------
drivers/net/e1000/igb_ethdev.c | 22 +++++++++++++
drivers/net/e1000/igb_rxtx.c | 66 ++++++++++++++++++++++++-------------
5 files changed, 156 insertions(+), 53 deletions(-)
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 4e69e44..3c6f613 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -108,6 +108,30 @@
ETH_RSS_IPV6_TCP_EX | \
ETH_RSS_IPV6_UDP_EX)
+/*
+ * Maximum number of Ring Descriptors.
+ *
+ * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
+ * descriptors should meet the following condition:
+ * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
+ */
+#define E1000_MIN_RING_DESC 32
+#define E1000_MAX_RING_DESC 4096
+
+/*
+ * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
+ * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
+ * This will also optimize cache line size effect.
+ * H/W supports up to cache line size 128.
+ */
+#define E1000_ALIGN 128
+
+#define IGB_RXD_ALIGN (E1000_ALIGN / sizeof(union e1000_adv_rx_desc))
+#define IGB_TXD_ALIGN (E1000_ALIGN / sizeof(union e1000_adv_tx_desc))
+
+#define EM_RXD_ALIGN (E1000_ALIGN / sizeof(struct e1000_rx_desc))
+#define EM_TXD_ALIGN (E1000_ALIGN / sizeof(struct e1000_data_desc))
+
/* structure for interrupt relative data */
struct e1000_interrupt {
uint32_t flags;
@@ -307,6 +331,12 @@ void igb_pf_mbx_process(struct rte_eth_dev *eth_dev);
int igb_pf_host_configure(struct rte_eth_dev *eth_dev);
+void igb_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void igb_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
/*
* RX/TX EM function prototypes
*/
@@ -343,6 +373,12 @@ uint16_t eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+void em_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void em_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
void igb_pf_host_uninit(struct rte_eth_dev *dev);
#endif /* _E1000_ETHDEV_H_ */
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 912f5dd..0cbc228 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -166,6 +166,8 @@ static const struct eth_dev_ops eth_em_ops = {
.mac_addr_add = eth_em_rar_set,
.mac_addr_remove = eth_em_rar_clear,
.set_mc_addr_list = eth_em_set_mc_addr_list,
+ .rxq_info_get = em_rxq_info_get,
+ .txq_info_get = em_txq_info_get,
};
/**
@@ -933,6 +935,18 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = EM_RXD_ALIGN,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = EM_TXD_ALIGN,
+ };
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 3b8776d..03e1bc2 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1081,26 +1081,6 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return (nb_rx);
}
-/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
- * This will also optimize cache line size effect.
- * H/W supports up to cache line size 128.
- */
-#define EM_ALIGN 128
-
-/*
- * Maximum number of Ring Descriptors.
- *
- * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring
- * desscriptors should meet the following condition:
- * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
- */
-#define EM_MIN_RING_DESC 32
-#define EM_MAX_RING_DESC 4096
-
#define EM_MAX_BUF_SIZE 16384
#define EM_RCTL_FLXBUF_STEP 1024
@@ -1210,11 +1190,11 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of transmit descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of EM_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(*txq->tx_ring)) % EM_ALIGN) != 0 ||
- (nb_desc > EM_MAX_RING_DESC) ||
- (nb_desc < EM_MIN_RING_DESC)) {
+ if (nb_desc % EM_TXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return -(EINVAL);
}
@@ -1272,7 +1252,7 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- tsize = sizeof (txq->tx_ring[0]) * EM_MAX_RING_DESC;
+ tsize = sizeof(txq->tx_ring[0]) * E1000_MAX_RING_DESC;
if ((tz = ring_dma_zone_reserve(dev, "tx_ring", queue_idx, tsize,
socket_id)) == NULL)
return (-ENOMEM);
@@ -1375,11 +1355,11 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of receive descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of EM_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(rxq->rx_ring[0])) % EM_ALIGN) != 0 ||
- (nb_desc > EM_MAX_RING_DESC) ||
- (nb_desc < EM_MIN_RING_DESC)) {
+ if (nb_desc % EM_RXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -1399,7 +1379,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
}
/* Allocate RX ring for max possible mumber of hardware descriptors. */
- rsize = sizeof (rxq->rx_ring[0]) * EM_MAX_RING_DESC;
+ rsize = sizeof(rxq->rx_ring[0]) * E1000_MAX_RING_DESC;
if ((rz = ring_dma_zone_reserve(dev, "rx_ring", queue_idx, rsize,
socket_id)) == NULL)
return (-ENOMEM);
@@ -1881,3 +1861,34 @@ eth_em_tx_init(struct rte_eth_dev *dev)
/* This write will effectively turn on the transmit unit. */
E1000_WRITE_REG(hw, E1000_TCTL, tctl);
}
+
+void
+em_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct em_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+}
+
+void
+em_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct em_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 848ef6e..73c067e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -281,6 +281,18 @@ static const struct rte_pci_id pci_id_igbvf_map[] = {
{0},
};
+static const struct rte_eth_desc_lim rx_desc_lim = {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = IGB_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+ .nb_max = E1000_MAX_RING_DESC,
+ .nb_min = E1000_MIN_RING_DESC,
+ .nb_align = IGB_RXD_ALIGN,
+};
+
static const struct eth_dev_ops eth_igb_ops = {
.dev_configure = eth_igb_configure,
.dev_start = eth_igb_start,
@@ -319,6 +331,8 @@ static const struct eth_dev_ops eth_igb_ops = {
.rss_hash_conf_get = eth_igb_rss_hash_conf_get,
.filter_ctrl = eth_igb_filter_ctrl,
.set_mc_addr_list = eth_igb_set_mc_addr_list,
+ .rxq_info_get = igb_rxq_info_get,
+ .txq_info_get = igb_txq_info_get,
.timesync_enable = igb_timesync_enable,
.timesync_disable = igb_timesync_disable,
.timesync_read_rx_timestamp = igb_timesync_read_rx_timestamp,
@@ -349,6 +363,8 @@ static const struct eth_dev_ops igbvf_eth_dev_ops = {
.tx_queue_setup = eth_igb_tx_queue_setup,
.tx_queue_release = eth_igb_tx_queue_release,
.set_mc_addr_list = eth_igb_set_mc_addr_list,
+ .rxq_info_get = igb_rxq_info_get,
+ .txq_info_get = igb_txq_info_get,
.mac_addr_set = igbvf_default_mac_addr_set,
.get_reg_length = igbvf_get_reg_length,
.get_reg = igbvf_get_regs,
@@ -1570,6 +1586,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
},
.txq_flags = 0,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
static void
@@ -1621,6 +1640,9 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
},
.txq_flags = 0,
};
+
+ dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 19905fd..cca3300 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1148,25 +1148,12 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
/*
- * Rings setup and release.
- *
- * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
- * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
- * This will also optimize cache line size effect.
- * H/W supports up to cache line size 128.
- */
-#define IGB_ALIGN 128
-
-/*
* Maximum number of Ring Descriptors.
*
* Since RDLEN/TDLEN should be multiple of 128bytes, the number of ring
* desscriptors should meet the following condition:
* (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0
*/
-#define IGB_MIN_RING_DESC 32
-#define IGB_MAX_RING_DESC 4096
-
static const struct rte_memzone *
ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
uint16_t queue_id, uint32_t ring_size, int socket_id)
@@ -1183,10 +1170,10 @@ ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
#ifdef RTE_LIBRTE_XEN_DOM0
return rte_memzone_reserve_bounded(z_name, ring_size,
- socket_id, 0, IGB_ALIGN, RTE_PGSIZE_2M);
+ socket_id, 0, E1000_ALIGN, RTE_PGSIZE_2M);
#else
return rte_memzone_reserve_aligned(z_name, ring_size,
- socket_id, 0, IGB_ALIGN);
+ socket_id, 0, E1000_ALIGN);
#endif
}
@@ -1282,10 +1269,11 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of transmit descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of IGB_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(union e1000_adv_tx_desc)) % IGB_ALIGN) != 0 ||
- (nb_desc > IGB_MAX_RING_DESC) || (nb_desc < IGB_MIN_RING_DESC)) {
+ if (nb_desc % IGB_TXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return -EINVAL;
}
@@ -1321,7 +1309,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- size = sizeof(union e1000_adv_tx_desc) * IGB_MAX_RING_DESC;
+ size = sizeof(union e1000_adv_tx_desc) * E1000_MAX_RING_DESC;
tz = ring_dma_zone_reserve(dev, "tx_ring", queue_idx,
size, socket_id);
if (tz == NULL) {
@@ -1430,10 +1418,11 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
/*
* Validate number of receive descriptors.
* It must not exceed hardware maximum, and must be multiple
- * of IGB_ALIGN.
+ * of E1000_ALIGN.
*/
- if (((nb_desc * sizeof(union e1000_adv_rx_desc)) % IGB_ALIGN) != 0 ||
- (nb_desc > IGB_MAX_RING_DESC) || (nb_desc < IGB_MIN_RING_DESC)) {
+ if (nb_desc % IGB_RXD_ALIGN != 0 ||
+ (nb_desc > E1000_MAX_RING_DESC) ||
+ (nb_desc < E1000_MIN_RING_DESC)) {
return (-EINVAL);
}
@@ -1469,7 +1458,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
* handle the maximum ring size is allocated in order to allow for
* resizing in later calls to the queue setup function.
*/
- size = sizeof(union e1000_adv_rx_desc) * IGB_MAX_RING_DESC;
+ size = sizeof(union e1000_adv_rx_desc) * E1000_MAX_RING_DESC;
rz = ring_dma_zone_reserve(dev, "rx_ring", queue_idx, size, socket_id);
if (rz == NULL) {
igb_rx_queue_release(rxq);
@@ -2482,3 +2471,34 @@ eth_igbvf_tx_init(struct rte_eth_dev *dev)
}
}
+
+void
+igb_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct igb_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+}
+
+void
+igb_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct igb_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+}
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 5/9] fm10k: add HW specific desc_lim data into dev_info
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (4 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 4/9] e1000: " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 6/9] cxgbe: " Konstantin Ananyev
` (3 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/fm10k/fm10k_ethdev.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index a69c990..9588dab 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -964,6 +964,17 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
ETH_TXQ_FLAGS_NOOFFLOADS,
};
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = FM10K_MAX_RX_DESC,
+ .nb_min = FM10K_MIN_RX_DESC,
+ .nb_align = FM10K_MULT_RX_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = FM10K_MAX_TX_DESC,
+ .nb_min = FM10K_MIN_TX_DESC,
+ .nb_align = FM10K_MULT_TX_DESC,
+ };
}
static int
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 6/9] cxgbe: add HW specific desc_lim data into dev_info
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (5 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 5/9] fm10k: add HW specific desc_lim data into dev_info Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 7/9] vmxnet3: " Konstantin Ananyev
` (2 subsequent siblings)
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index a8e057b..920e071 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -141,6 +141,12 @@ static void cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
struct adapter *adapter = pi->adapter;
int max_queues = adapter->sge.max_ethqsets / adapter->params.nports;
+ static const struct rte_eth_desc_lim cxgbe_desc_lim = {
+ .nb_max = CXGBE_MAX_RING_DESC_SIZE,
+ .nb_min = CXGBE_MIN_RING_DESC_SIZE,
+ .nb_align = 1,
+ };
+
device_info->min_rx_bufsize = CXGBE_MIN_RX_BUFSIZE;
device_info->max_rx_pktlen = CXGBE_MAX_RX_PKTLEN;
device_info->max_rx_queues = max_queues;
@@ -162,6 +168,9 @@ static void cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_TCP_TSO;
device_info->reta_size = pi->rss_size;
+
+ device_info->rx_desc_lim = cxgbe_desc_lim;
+ device_info->tx_desc_lim = cxgbe_desc_lim;
}
static void cxgbe_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 7/9] vmxnet3: add HW specific desc_lim data into dev_info
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (6 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 6/9] cxgbe: " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-10-31 8:54 ` Yong Wang
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information Konstantin Ananyev
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit Konstantin Ananyev
9 siblings, 1 reply; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a70be5c..3745b7d 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -677,6 +677,18 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
dev_info->default_txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
ETH_TXQ_FLAGS_NOOFFLOADS;
dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = VMXNET3_RX_RING_MAX_SIZE,
+ .nb_min = VMXNET3_DEF_RX_RING_SIZE,
+ .nb_align = 1,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = VMXNET3_TX_RING_MAX_SIZE,
+ .nb_min = VMXNET3_DEF_TX_RING_SIZE,
+ .nb_align = 1,
+ };
}
/* return 0 means link status changed, -1 means not changed */
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (7 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 7/9] vmxnet3: " Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
2015-11-01 23:16 ` Thomas Monjalon
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit Konstantin Ananyev
9 siblings, 1 reply; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
app/test-pmd/cmdline.c | 48 +++++++++++++++++++++++++++++++
app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 2 ++
3 files changed, 127 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0f8f48f..ea2b8a8 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5305,6 +5305,53 @@ cmdline_parse_inst_t cmd_showport = {
},
};
+/* *** SHOW QUEUE INFO *** */
+struct cmd_showqueue_result {
+ cmdline_fixed_string_t show;
+ cmdline_fixed_string_t type;
+ cmdline_fixed_string_t what;
+ uint8_t portnum;
+ uint16_t queuenum;
+};
+
+static void
+cmd_showqueue_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_showqueue_result *res = parsed_result;
+
+ if (!strcmp(res->type, "rxq"))
+ rx_queue_infos_display(res->portnum, res->queuenum);
+ else if (!strcmp(res->type, "txq"))
+ tx_queue_infos_display(res->portnum, res->queuenum);
+}
+
+cmdline_parse_token_string_t cmd_showqueue_show =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, show, "show");
+cmdline_parse_token_string_t cmd_showqueue_type =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, type, "rxq#txq");
+cmdline_parse_token_string_t cmd_showqueue_what =
+ TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, what, "info");
+cmdline_parse_token_num_t cmd_showqueue_portnum =
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, portnum, UINT8);
+cmdline_parse_token_num_t cmd_showqueue_queuenum =
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, queuenum, UINT16);
+
+cmdline_parse_inst_t cmd_showqueue = {
+ .f = cmd_showqueue_parsed,
+ .data = NULL,
+ .help_str = "show rxq|txq info <port number> <queue_number>",
+ .tokens = {
+ (void *)&cmd_showqueue_show,
+ (void *)&cmd_showqueue_type,
+ (void *)&cmd_showqueue_what,
+ (void *)&cmd_showqueue_portnum,
+ (void *)&cmd_showqueue_queuenum,
+ NULL,
+ },
+};
+
/* *** READ PORT REGISTER *** */
struct cmd_read_reg_result {
cmdline_fixed_string_t read;
@@ -8910,6 +8957,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_help_long,
(cmdline_parse_inst_t *)&cmd_quit,
(cmdline_parse_inst_t *)&cmd_showport,
+ (cmdline_parse_inst_t *)&cmd_showqueue,
(cmdline_parse_inst_t *)&cmd_showportall,
(cmdline_parse_inst_t *)&cmd_showcfg,
(cmdline_parse_inst_t *)&cmd_start,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cf2aa6e..aad2ab6 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -293,6 +293,69 @@ nic_stats_mapping_display(portid_t port_id)
}
void
+rx_queue_infos_display(portid_t port_id, uint16_t queue_id)
+{
+ struct rte_eth_rxq_info qinfo;
+ int32_t rc;
+ static const char *info_border = "*********************";
+
+ rc = rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo);
+ if (rc != 0) {
+ printf("Failed to retrieve information for port: %hhu, "
+ "RX queue: %hu\nerror desc: %s(%d)\n",
+ port_id, queue_id, strerror(-rc), rc);
+ return;
+ }
+
+ printf("\n%s Infos for port %-2u, RX queue %-2u %s",
+ info_border, port_id, queue_id, info_border);
+
+ printf("\nMempool: %s", (qinfo.mp == NULL) ? "NULL" : qinfo.mp->name);
+ printf("\nRX prefetch threshold: %hhu", qinfo.conf.rx_thresh.pthresh);
+ printf("\nRX host threshold: %hhu", qinfo.conf.rx_thresh.hthresh);
+ printf("\nRX writeback threshold: %hhu", qinfo.conf.rx_thresh.wthresh);
+ printf("\nRX free threshold: %hu", qinfo.conf.rx_free_thresh);
+ printf("\nRX drop packets: %s",
+ (qinfo.conf.rx_drop_en != 0) ? "on" : "off");
+ printf("\nRX deferred start: %s",
+ (qinfo.conf.rx_deferred_start != 0) ? "on" : "off");
+ printf("\nRX scattered packets: %s",
+ (qinfo.scattered_rx != 0) ? "on" : "off");
+ printf("\nNumber of RXDs: %hu", qinfo.nb_desc);
+ printf("\n");
+}
+
+void
+tx_queue_infos_display(portid_t port_id, uint16_t queue_id)
+{
+ struct rte_eth_txq_info qinfo;
+ int32_t rc;
+ static const char *info_border = "*********************";
+
+ rc = rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo);
+ if (rc != 0) {
+ printf("Failed to retrieve information for port: %hhu, "
+ "TX queue: %hu\nerror desc: %s(%d)\n",
+ port_id, queue_id, strerror(-rc), rc);
+ return;
+ }
+
+ printf("\n%s Infos for port %-2u, TX queue %-2u %s",
+ info_border, port_id, queue_id, info_border);
+
+ printf("\nTX prefetch threshold: %hhu", qinfo.conf.tx_thresh.pthresh);
+ printf("\nTX host threshold: %hhu", qinfo.conf.tx_thresh.hthresh);
+ printf("\nTX writeback threshold: %hhu", qinfo.conf.tx_thresh.wthresh);
+ printf("\nTX RS threshold: %hu", qinfo.conf.tx_rs_thresh);
+ printf("\nTX free threshold: %hu", qinfo.conf.tx_free_thresh);
+ printf("\nTX flags: %#x", qinfo.conf.txq_flags);
+ printf("\nTX deferred start: %s",
+ (qinfo.conf.tx_deferred_start != 0) ? "on" : "off");
+ printf("\nNumber of TXDs: %hu", qinfo.nb_desc);
+ printf("\n");
+}
+
+void
port_infos_display(portid_t port_id)
{
struct rte_port *port;
@@ -380,6 +443,20 @@ port_infos_display(portid_t port_id)
printf(" %s\n", (p ? p : "unknown"));
}
}
+
+ printf("Max possible RX queues: %u\n", dev_info.max_rx_queues);
+ printf("Max possible number of RXDs per queue: %hu\n",
+ dev_info.rx_desc_lim.nb_max);
+ printf("Min possible number of RXDs per queue: %hu\n",
+ dev_info.rx_desc_lim.nb_min);
+ printf("RXDs number alignment: %hu\n", dev_info.rx_desc_lim.nb_align);
+
+ printf("Max possible TX queues: %u\n", dev_info.max_tx_queues);
+ printf("Max possible number of TXDs per queue: %hu\n",
+ dev_info.tx_desc_lim.nb_max);
+ printf("Min possible number of TXDs per queue: %hu\n",
+ dev_info.tx_desc_lim.nb_min);
+ printf("TXDs number alignment: %hu\n", dev_info.tx_desc_lim.nb_align);
}
int
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f925df7..5ea773f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -479,6 +479,8 @@ void nic_xstats_display(portid_t port_id);
void nic_xstats_clear(portid_t port_id);
void nic_stats_mapping_display(portid_t port_id);
void port_infos_display(portid_t port_id);
+void rx_queue_infos_display(portid_t port_id, uint16_t queue_id);
+void tx_queue_infos_display(portid_t port_id, uint16_t queue_id);
void fwd_lcores_config_display(void);
void fwd_config_display(void);
void rxtx_config_display(void);
--
1.8.5.3
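For reference, once this patch is applied the new commands are entered at the
interactive testpmd prompt as shown below (port 0 and queue 0 are arbitrary
examples); the fields printed are exactly those emitted by
rx_queue_infos_display() and tx_queue_infos_display() above:

testpmd> show rxq info 0 0
testpmd> show txq info 0 0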
^ permalink raw reply [flat|nested] 26+ messages in thread
* [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
2015-10-22 12:06 ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
` (8 preceding siblings ...)
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information Konstantin Ananyev
@ 2015-10-27 12:51 ` Konstantin Ananyev
9 siblings, 0 replies; 26+ messages in thread
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_2_2.rst | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index de6916e..aff6306 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -11,6 +11,11 @@ New Features
* **Added vhost-user multiple queue support.**
+* **Add new API into rte_ethdev to retrieve RX/TX queue information.**
+
+ * Add the ability for the upper layer to query RX/TX queue information.
+ * Add into rte_eth_dev_info new fields to represent information about
+ RX/TX descriptors min/max/align numbers per queue for the device.
Resolved Issues
---------------
@@ -98,6 +103,11 @@ API Changes
* The devargs union field virtual is renamed to virt for C++ compatibility.
+* New functions rte_eth_rx_queue_info_get() and rte_eth_tx_queue_info_get()
+ are introduced.
+
+* New fields rx_desc_lim and tx_desc_lim are added into rte_eth_dev_info
+ structure.
ABI Changes
-----------
@@ -108,6 +118,9 @@ ABI Changes
* The ethdev flow director entries for SCTP were changed.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* New fields rx_desc_lim and tx_desc_lim were added into rte_eth_dev_info
+ structure.
+
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
--
1.8.5.3
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 0/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
@ 2015-10-28 9:55 ` Remy Horton
2015-11-01 23:17 ` Thomas Monjalon
0 siblings, 1 reply; 26+ messages in thread
From: Remy Horton @ 2015-10-28 9:55 UTC (permalink / raw)
To: dev
On 27/10/2015 12:51, Konstantin Ananyev wrote:
> Konstantin Ananyev (9):
> ethdev: add new API to retrieve RX/TX queue information
> i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> fm10k: add HW specific desc_lim data into dev_info
> cxgbe: add HW specific desc_lim data into dev_info
> vmxnet3: add HW specific desc_lim data into dev_info
> testpmd: add new command to display RX/TX queue information
> doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
Acked-by: Remy Horton <remy.horton@intel.com>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 7/9] vmxnet3: add HW specific desc_lim data into dev_info
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 7/9] vmxnet3: " Konstantin Ananyev
@ 2015-10-31 8:54 ` Yong Wang
2015-11-02 10:33 ` Ananyev, Konstantin
0 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2015-10-31 8:54 UTC (permalink / raw)
To: Konstantin Ananyev, dev
On 10/27/15, 5:51 AM, "Konstantin Ananyev" <konstantin.ananyev@intel.com> wrote:
>Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>---
Acked-by: Yong Wang <yongwang@vmware.com>
Do you plan to implement rxq_info_get and txq_info_get for vmxnet3 in subsequent patches?
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
>diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
>index a70be5c..3745b7d 100644
>--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
>+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
>@@ -677,6 +677,18 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
> dev_info->default_txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
> ETH_TXQ_FLAGS_NOOFFLOADS;
> dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
>+
>+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
>+ .nb_max = VMXNET3_RX_RING_MAX_SIZE,
>+ .nb_min = VMXNET3_DEF_RX_RING_SIZE,
>+ .nb_align = 1,
>+ };
>+
>+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
>+ .nb_max = VMXNET3_TX_RING_MAX_SIZE,
>+ .nb_min = VMXNET3_DEF_TX_RING_SIZE,
>+ .nb_align = 1,
>+ };
> }
>
> /* return 0 means link status changed, -1 means not changed */
>--
>1.8.5.3
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information
2015-10-27 12:51 ` [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information Konstantin Ananyev
@ 2015-11-01 23:16 ` Thomas Monjalon
2015-11-02 13:33 ` Ananyev, Konstantin
0 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2015-11-01 23:16 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: dev
2015-10-27 12:51, Konstantin Ananyev:
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> app/test-pmd/cmdline.c | 48 +++++++++++++++++++++++++++++++
> app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++
> app/test-pmd/testpmd.h | 2 ++
> 3 files changed, 127 insertions(+)
Should we update the testpmd guide?
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 0/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-28 9:55 ` Remy Horton
@ 2015-11-01 23:17 ` Thomas Monjalon
0 siblings, 0 replies; 26+ messages in thread
From: Thomas Monjalon @ 2015-11-01 23:17 UTC (permalink / raw)
To: konstantin.ananyev; +Cc: dev
> > Konstantin Ananyev (9):
> > ethdev: add new API to retrieve RX/TX queue information
> > i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> > ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> > e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
> > fm10k: add HW specific desc_lim data into dev_info
> > cxgbe: add HW specific desc_lim data into dev_info
> > vmxnet3: add HW specific desc_lim data into dev_info
> > testpmd: add new command to display RX/TX queue information
> > doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
>
> Acked-by: Remy Horton <remy.horton@intel.com>
Applied, thanks
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 7/9] vmxnet3: add HW specific desc_lim data into dev_info
2015-10-31 8:54 ` Yong Wang
@ 2015-11-02 10:33 ` Ananyev, Konstantin
0 siblings, 0 replies; 26+ messages in thread
From: Ananyev, Konstantin @ 2015-11-02 10:33 UTC (permalink / raw)
To: Yong Wang, dev
Hi
> -----Original Message-----
> From: Yong Wang [mailto:yongwang@vmware.com]
> Sent: Saturday, October 31, 2015 8:55 AM
> To: Ananyev, Konstantin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCHv7 7/9] vmxnet3: add HW specific desc_lim data into dev_info
>
> On 10/27/15, 5:51 AM, "Konstantin Ananyev" <konstantin.ananyev@intel.com> wrote:
>
>
> >Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >---
>
> Acked-by: Yong Wang <yongwang@vmware.com>
>
> Do you plan to implement rxq_info_get and txq_info_get for vmxnet3 in subsequent patches?
I might, though my hope is that engineers who are familiar with particular PMDs will
pick it up and add support for (rxq|txq)_info_get() in the remaining PMDs.
So if you feel like that, please don't hesitate :)
Thanks
Konstantin
>
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 ++++++++++++
> > 1 file changed, 12 insertions(+)
> >
> >diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> >index a70be5c..3745b7d 100644
> >--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> >+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> >@@ -677,6 +677,18 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
> > dev_info->default_txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
> > ETH_TXQ_FLAGS_NOOFFLOADS;
> > dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
> >+
> >+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
> >+ .nb_max = VMXNET3_RX_RING_MAX_SIZE,
> >+ .nb_min = VMXNET3_DEF_RX_RING_SIZE,
> >+ .nb_align = 1,
> >+ };
> >+
> >+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
> >+ .nb_max = VMXNET3_TX_RING_MAX_SIZE,
> >+ .nb_min = VMXNET3_DEF_TX_RING_SIZE,
> >+ .nb_align = 1,
> >+ };
> > }
> >
> > /* return 0 means link status changed, -1 means not changed */
> >--
> >1.8.5.3
> >
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information
2015-11-01 23:16 ` Thomas Monjalon
@ 2015-11-02 13:33 ` Ananyev, Konstantin
0 siblings, 0 replies; 26+ messages in thread
From: Ananyev, Konstantin @ 2015-11-02 13:33 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Sunday, November 01, 2015 11:16 PM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org; Mcnamara, John
> Subject: Re: [dpdk-dev] [PATCHv7 8/9] testpmd: add new command to display RX/TX queue information
>
> 2015-10-27 12:51, Konstantin Ananyev:
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> > app/test-pmd/cmdline.c | 48 +++++++++++++++++++++++++++++++
> > app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > app/test-pmd/testpmd.h | 2 ++
> > 3 files changed, 127 insertions(+)
>
> Should we update the testpmd guide?
Ah yes, forgot about that one.
Will send a separate patch then.
Thanks
Konstantin
^ permalink raw reply [flat|nested] 26+ messages in thread