* [dpdk-dev] [PATCH 0/4] deferred queue setup
@ 2018-02-12 4:53 Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 1/4] ether: support " Qi Zhang
` (10 more replies)
0 siblings, 11 replies; 95+ messages in thread
From: Qi Zhang @ 2018-02-12 4:53 UTC (permalink / raw)
To: thomas
Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev, Qi Zhang
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This does not satisfy the use case where an application wants to defer
setup of part of the queues while keeping traffic running on the queues
that are already set up.
example:
rte_eth_dev_configure(nb_rxq = 2, nb_txq = 2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
Basically this is not a general hardware limitation: for NICs such as
i40e and ixgbe, it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has
no traffic on it.
The patch lets the ethdev driver expose capability flags through
rte_eth_dev_info_get when it supports deferred queue configuration;
based on these flags, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to fail when the device is already
started.
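A minimal sketch of the intended usage (assuming the DEV_DEFERRED_* flags
proposed in this series; port_conf, rx_conf, tx_conf, mb_pool and the other
locals are assumed to be prepared elsewhere):
    struct rte_eth_dev_info dev_info;
    rte_eth_dev_configure(port_id, 2, 2, &port_conf);
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rx_conf, mb_pool);
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &tx_conf);
    rte_eth_dev_start(port_id); /* rx/tx burst can run on queue 0 */
    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.deferred_queue_config_capa & DEV_DEFERRED_RX_QUEUE_SETUP)
        /* queue 1 can now be set up while queue 0 keeps running */
        rte_eth_rx_queue_setup(port_id, 1, nb_rxd, socket_id,
                               &rx_conf, mb_pool);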
Qi Zhang (4):
ether: support deferred queue setup
app/testpmd: add parameters for deferred queue setup
app/testpmd: add command for queue setup
net/i40e: enable deferred queue setup
app/test-pmd/cmdline.c | 136 ++++++++++++++++++++++++++++
app/test-pmd/parameters.c | 29 ++++++
app/test-pmd/testpmd.c | 8 +-
app/test-pmd/testpmd.h | 2 +
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/run_app.rst | 12 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
drivers/net/i40e/i40e_ethdev.c | 6 ++
drivers/net/i40e/i40e_rxtx.c | 62 ++++++++++++-
lib/librte_ether/rte_ethdev.c | 30 +++---
lib/librte_ether/rte_ethdev.h | 8 ++
11 files changed, 292 insertions(+), 16 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH 1/4] ether: support deferred queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
@ 2018-02-12 4:53 ` Qi Zhang
2018-02-12 13:55 ` Thomas Monjalon
2018-02-12 4:53 ` [dpdk-dev] [PATCH 2/4] app/testpmd: add parameters for " Qi Zhang
` (9 subsequent siblings)
10 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-02-12 4:53 UTC (permalink / raw)
To: thomas
Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev, Qi Zhang
The patch lets the ethdev driver expose a capability flag through
rte_eth_dev_info_get when it supports deferred queue configuration;
based on the flag, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to fail when the device is already
started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 8 ++++++++
3 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..36ad21a1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_deferred_setup_capabilities:
+Queue deferred setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFERRED_TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELEASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index a6ce2a5ba..6c906c4df 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_RX_QUEUE_SETUP))
+ return -EINVAL;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-ENOTSUP);
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_RX_QUEUE_RELEASE))
+ return -EINVAL;
(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
rxq[rx_queue_id] = NULL;
}
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_TX_QUEUE_SETUP))
+ return -EINVAL;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-ENOTSUP);
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_TX_QUEUE_RELEASE))
+ return -EINVAL;
(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
txq[tx_queue_id] = NULL;
}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 036153306..6fc960c34 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,12 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+/** < Deferred queue setup / release capability */
+#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001
+#define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002
+#define DEV_DEFERRED_RX_QUEUE_RELEASE 0x00000004
+#define DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1035,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t deferred_queue_config_capa;
+ /**< a queue can be setup/release after dev_start */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH 2/4] app/testpmd: add parameters for deferred queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 1/4] ether: support " Qi Zhang
@ 2018-02-12 4:53 ` Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 3/4] app/testpmd: add command for " Qi Zhang
` (8 subsequent siblings)
10 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-02-12 4:53 UTC (permalink / raw)
To: thomas
Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev, Qi Zhang
Add two parameters:
rxq-setup: set the number of RX queues to be set up before the device is started.
txq-setup: set the number of TX queues to be set up before the device is started.
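An illustrative invocation (EAL options abbreviated) that configures 4 RX/TX
queues per port but only sets up the first 2 of each before the device is
started, leaving the rest for runtime setup:
    ./testpmd -l 0-3 -n 4 -- -i --rxq=4 --txq=4 --rxq-setup=2 --txq-setup=2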
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
app/test-pmd/parameters.c | 29 +++++++++++++++++++++++++++++
app/test-pmd/testpmd.c | 8 ++++++--
app/test-pmd/testpmd.h | 2 ++
doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
4 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 97d22b860..497259ee7 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -146,8 +146,12 @@ usage(char* progname)
printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
printf(" --rxq=N: set the number of RX queues per port to N.\n");
+ printf(" --rxq-setup=N: set the number of RX queues be setup before"
+ "device start to N.\n");
printf(" --rxd=N: set the number of descriptors in RX rings to N.\n");
printf(" --txq=N: set the number of TX queues per port to N.\n");
+ printf(" --txq-setup=N: set the number of TX queues be setup before"
+ "device start to N.\n");
printf(" --txd=N: set the number of descriptors in TX rings to N.\n");
printf(" --burst=N: set the number of packets per burst to N.\n");
printf(" --mbcache=N: set the cache of mbuf memory pool to N.\n");
@@ -596,7 +600,9 @@ launch_args_parse(int argc, char** argv)
{ "rss-ip", 0, 0, 0 },
{ "rss-udp", 0, 0, 0 },
{ "rxq", 1, 0, 0 },
+ { "rxq-setup", 1, 0, 0 },
{ "txq", 1, 0, 0 },
+ { "txq-setup", 1, 0, 0 },
{ "rxd", 1, 0, 0 },
{ "txd", 1, 0, 0 },
{ "burst", 1, 0, 0 },
@@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
" >= 0 && <= %u\n", n,
get_allowed_max_nb_rxq(&pid));
}
+ if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
+ n = atoi(optarg);
+ if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
+ nb_rxq_setup = (queueid_t) n;
+ else
+ rte_exit(EXIT_FAILURE, "rxq-setup %d invalid - must be"
+ " >= 0 && <= %u\n", n,
+ get_allowed_max_nb_rxq(&pid));
+ }
if (!strcmp(lgopts[opt_idx].name, "txq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
@@ -942,6 +957,15 @@ launch_args_parse(int argc, char** argv)
" >= 0 && <= %u\n", n,
get_allowed_max_nb_txq(&pid));
}
+ if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
+ n = atoi(optarg);
+ if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
+ nb_txq_setup = (queueid_t) n;
+ else
+ rte_exit(EXIT_FAILURE, "txq-setup %d invalid - must be"
+ " >= 0 && <= %u\n", n,
+ get_allowed_max_nb_txq(&pid));
+ }
if (!nb_rxq && !nb_txq) {
rte_exit(EXIT_FAILURE, "Either rx or tx queues should "
"be non-zero\n");
@@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
/* Set offload configuration from command line parameters. */
rx_mode.offloads = rx_offloads;
tx_mode.offloads = tx_offloads;
+
+ if (nb_rxq_setup > nb_rxq)
+ nb_rxq_setup = nb_rxq;
+ if (nb_txq_setup > nb_txq)
+ nb_txq_setup = nb_txq;
}
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 46dc22c94..790e7359c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
*/
queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
queueid_t nb_txq = 1; /**< Number of TX queues per port. */
+queueid_t nb_rxq_setup = MAX_QUEUE_ID;
+/**< Number of RX queues per port to set up before dev_start. */
+queueid_t nb_txq_setup = MAX_QUEUE_ID;
+/**< Number of TX queues per port to set up before dev_start. */
/*
* Configurable number of RX/TX ring descriptors.
@@ -1594,7 +1598,7 @@ start_port(portid_t pid)
/* Apply Tx offloads configuration */
port->tx_conf.offloads = port->dev_conf.txmode.offloads;
/* setup tx queues */
- for (qi = 0; qi < nb_txq; qi++) {
+ for (qi = 0; qi < nb_txq_setup; qi++) {
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,
@@ -1622,7 +1626,7 @@ start_port(portid_t pid)
/* Apply Rx offloads configuration */
port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
/* setup rx queues */
- for (qi = 0; qi < nb_rxq; qi++) {
+ for (qi = 0; qi < nb_rxq_setup; qi++) {
if ((numa_support) &&
(rxring_numa[pi] != NUMA_NO_CONFIG)) {
struct rte_mempool * mp =
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 153abea05..1a423eb8c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -373,6 +373,8 @@ extern uint64_t rss_hf;
extern queueid_t nb_rxq;
extern queueid_t nb_txq;
+extern queueid_t nb_rxq_setup;
+extern queueid_t nb_txq_setup;
extern uint16_t nb_rxd;
extern uint16_t nb_txd;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 1fd53958a..63dbec407 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -354,6 +354,12 @@ The commandline options are:
Set the number of RX queues per port to N, where 1 <= N <= 65535.
The default value is 1.
+* ``--rxq-setup=N``
+
+ Set the number of RX queues that will be set up before the device is
+ started, where 0 <= N <= 65535. The default value is rxq; if the number
+ is larger than rxq, it will be set to rxq automatically.
+
* ``--rxd=N``
Set the number of descriptors in the RX rings to N, where N > 0.
@@ -364,6 +370,12 @@ The commandline options are:
Set the number of TX queues per port to N, where 1 <= N <= 65535.
The default value is 1.
+* ``--txq-setup=N``
+
+ Set the number of TX queues that will be set up before the device is
+ started, where 0 <= N <= 65535. The default value is txq; if the number
+ is larger than txq, it will be set to txq automatically.
+
* ``--txd=N``
Set the number of descriptors in the TX rings to N, where N > 0.
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH 3/4] app/testpmd: add command for queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 1/4] ether: support " Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 2/4] app/testpmd: add parameters for " Qi Zhang
@ 2018-02-12 4:53 ` Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 4/4] net/i40e: enable deferred " Qi Zhang
` (7 subsequent siblings)
10 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-02-12 4:53 UTC (permalink / raw)
To: thomas
Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev, Qi Zhang
Add a new command to set up a queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
rte_eth_[rx|tx]_queue_setup will be called correspondingly.
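For example (values are illustrative), re-setting up rx queue 1 of port 0
with a 512-descriptor ring:
    testpmd> queue setup rx 0 1 512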
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
app/test-pmd/cmdline.c | 136 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 143 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b4522f46a..b725f644d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size)\n"
+ " setup a not started queue or re-setup a started queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,138 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ if (numa_support &&
+ (rxring_numa[res->port_id] != NUMA_NO_CONFIG)) {
+ mp = mbuf_pool_find(rxring_numa[res->port_id]);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ rxring_numa[res->port_id],
+ &(port->rx_conf),
+ mp);
+ } else {
+ mp = mbuf_pool_find(port->socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue:"
+ "No mempool allocation"
+ " on the socket %d\n",
+ port->socket_id);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ port->socket_id,
+ &(port->rx_conf),
+ mp);
+ }
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ if ((numa_support) &&
+ (txring_numa[res->port_id] != NUMA_NO_CONFIG))
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ txring_numa[res->port_id],
+ &(port->tx_conf));
+ else
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ port->socket_id,
+ &(port->tx_conf));
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16407,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..74269cb03 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Set up a queue that is not started, or re-set up a queue that is already started::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH 4/4] net/i40e: enable deferred queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (2 preceding siblings ...)
2018-02-12 4:53 ` [dpdk-dev] [PATCH 3/4] app/testpmd: add command for " Qi Zhang
@ 2018-02-12 4:53 ` Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
` (6 subsequent siblings)
10 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-02-12 4:53 UTC (permalink / raw)
To: thomas
Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev, Qi Zhang
Expose the deferred queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
the device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 6 ++++
drivers/net/i40e/i40e_rxtx.c | 62 ++++++++++++++++++++++++++++++++++++++++--
2 files changed, 66 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 06b0f03a1..843a0c42a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->deferred_queue_config_capa =
+ DEV_DEFERRED_RX_QUEUE_SETUP |
+ DEV_DEFERRED_TX_QUEUE_SETUP |
+ DEV_DEFERRED_RX_QUEUE_RELEASE |
+ DEV_DEFERRED_TX_QUEUE_RELEASE;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..e5f532cf7 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t len, i;
uint16_t reg_idx, base, bsf, tc_mapping;
int q_offset, use_def_burst_func = 1;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_rx_queue_init(rxq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return ret;
+ }
+ if (ad->rx_vec_allowed)
+ i40e_rxq_vec_setup(rxq);
+ if (!rxq->rx_deferred_start) {
+ ret = i40e_dev_rx_queue_start(dev, queue_idx);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to start RX queue");
+ return ret;
+ }
+ }
+ }
+
return 0;
}
@@ -1848,13 +1868,21 @@ void
i40e_dev_rx_queue_release(void *rxq)
{
struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq;
+ struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
return;
}
- i40e_rx_queue_release_mbufs(q);
+ if (dev->data->dev_started) {
+ if (dev->data->rx_queue_state[q->queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED)
+ i40e_dev_rx_queue_stop(dev, q->queue_id);
+ } else {
+ i40e_rx_queue_release_mbufs(q);
+ }
+
rte_free(q->sw_ring);
rte_free(q);
}
@@ -1980,6 +2008,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
struct i40e_vf *vf = NULL;
@@ -1989,6 +2019,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t tx_rs_thresh, tx_free_thresh;
uint16_t reg_idx, i, base, bsf, tc_mapping;
int q_offset;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -2162,6 +2193,25 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_tx_queue_init(txq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return ret;
+ }
+ if (ad->tx_vec_allowed)
+ i40e_txq_vec_setup(txq);
+ if (!txq->tx_deferred_start) {
+ ret = i40e_dev_tx_queue_start(dev, queue_idx);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to start TX queue");
+ return ret;
+ }
+ }
+ }
+
return 0;
}
@@ -2169,13 +2219,21 @@ void
i40e_dev_tx_queue_release(void *txq)
{
struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
return;
}
- i40e_tx_queue_release_mbufs(q);
+ if (dev->data->dev_started) {
+ if (dev->data->tx_queue_state[q->queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED)
+ i40e_dev_tx_queue_stop(dev, q->queue_id);
+ } else {
+ i40e_tx_queue_release_mbufs(q);
+ }
+
rte_free(q->sw_ring);
rte_free(q);
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ether: support deferred queue setup
2018-02-12 4:53 ` [dpdk-dev] [PATCH 1/4] ether: support " Qi Zhang
@ 2018-02-12 13:55 ` Thomas Monjalon
0 siblings, 0 replies; 95+ messages in thread
From: Thomas Monjalon @ 2018-02-12 13:55 UTC (permalink / raw)
To: Qi Zhang; +Cc: dev, jingjing.wu, beilei.xing, arybchenko, konstantin.ananyev
As a general comment, please review wording and explanations of this patchset.
12/02/2018 05:53, Qi Zhang:
> +/** < Deferred queue setup / release capability */
> +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001
> +#define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002
> +#define DEV_DEFERRED_RX_QUEUE_RELEASE 0x00000004
> +#define DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008
Please document each value.
> @@ -1029,6 +1035,8 @@ struct rte_eth_dev_info {
> /** Configured number of rx/tx queues */
> uint16_t nb_rx_queues; /**< Number of RX queues. */
> uint16_t nb_tx_queues; /**< Number of TX queues. */
> + uint64_t deferred_queue_config_capa;
> + /**< a queue can be setup/release after dev_start */
Please refer to DEV_DEFERRED_* flags.
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v2 0/4] deferred queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (3 preceding siblings ...)
2018-02-12 4:53 ` [dpdk-dev] [PATCH 4/4] net/i40e: enable deferred " Qi Zhang
@ 2018-03-02 4:13 ` Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 1/4] ether: support " Qi Zhang
` (3 more replies)
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
` (5 subsequent siblings)
10 siblings, 4 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-02 4:13 UTC (permalink / raw)
To: thomas; +Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This does not satisfy the use case where an application wants to defer
setup of part of the queues while keeping traffic running on the queues
that are already set up.
example:
rte_eth_dev_configure(nb_rxq = 2, nb_txq = 2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
Basically this is not a general hardware limitation: for NICs such as
i40e and ixgbe, it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has
no traffic on it.
The patch lets the ethdev driver expose capability flags through
rte_eth_dev_info_get when it supports deferred queue configuration;
based on these flags, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to fail when the device is already
started.
v2:
- enhance comment in rte_ethdev.h
Qi Zhang (4):
ether: support deferred queue setup
app/testpmd: add parameters for deferred queue setup
app/testpmd: add command for queue setup
net/i40e: enable deferred queue setup
app/test-pmd/cmdline.c | 136 ++++++++++++++++++++++++++++
app/test-pmd/parameters.c | 29 ++++++
app/test-pmd/testpmd.c | 8 +-
app/test-pmd/testpmd.h | 2 +
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/run_app.rst | 12 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
drivers/net/i40e/i40e_ethdev.c | 6 ++
drivers/net/i40e/i40e_rxtx.c | 62 ++++++++++++-
lib/librte_ether/rte_ethdev.c | 30 +++---
lib/librte_ether/rte_ethdev.h | 11 +++
11 files changed, 295 insertions(+), 16 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
@ 2018-03-02 4:13 ` Qi Zhang
2018-03-14 12:31 ` Ananyev, Konstantin
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for " Qi Zhang
` (2 subsequent siblings)
3 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-02 4:13 UTC (permalink / raw)
To: thomas; +Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
The patch lets the ethdev driver expose a capability flag through
rte_eth_dev_info_get when it supports deferred queue configuration;
based on the flag, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to fail when the device is already
started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 11 +++++++++++
3 files changed, 37 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..36ad21a1f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_deferred_setup_capabilities:
+Queue deferred setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFERRED_TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELEASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index a6ce2a5ba..6c906c4df 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_RX_QUEUE_SETUP))
+ return -EINVAL;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-ENOTSUP);
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_RX_QUEUE_RELEASE))
+ return -EINVAL;
(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
rxq[rx_queue_id] = NULL;
}
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_TX_QUEUE_SETUP))
+ return -EINVAL;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-ENOTSUP);
+ if (dev->data->dev_started &&
+ !(dev_info.deferred_queue_config_capa &
+ DEV_DEFERRED_TX_QUEUE_RELEASE))
+ return -EINVAL;
(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
txq[tx_queue_id] = NULL;
}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 036153306..410e58c50 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,15 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001
+/**< Deferred setup rx queue */
+#define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002
+/**< Deferred setup tx queue */
+#define DEV_DEFERRED_RX_QUEUE_RELEASE 0x00000004
+/**< Deferred release rx queue */
+#define DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008
+/**< Deferred release tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t deferred_queue_config_capa;
+ /**< queues can be setup/release after dev_start (DEV_DEFERRED_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 1/4] ether: support " Qi Zhang
@ 2018-03-02 4:13 ` Qi Zhang
2018-03-14 17:38 ` Ananyev, Konstantin
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred " Qi Zhang
3 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-02 4:13 UTC (permalink / raw)
To: thomas; +Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add two parameters:
rxq-setup: set the number of RX queues to be set up before the device is started.
txq-setup: set the number of TX queues to be set up before the device is started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
app/test-pmd/parameters.c | 29 +++++++++++++++++++++++++++++
app/test-pmd/testpmd.c | 8 ++++++--
app/test-pmd/testpmd.h | 2 ++
doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
4 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 97d22b860..497259ee7 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -146,8 +146,12 @@ usage(char* progname)
printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
printf(" --rxq=N: set the number of RX queues per port to N.\n");
+ printf(" --rxq-setup=N: set the number of RX queues be setup before"
+ "device start to N.\n");
printf(" --rxd=N: set the number of descriptors in RX rings to N.\n");
printf(" --txq=N: set the number of TX queues per port to N.\n");
+ printf(" --txq-setup=N: set the number of TX queues be setup before"
+ "device start to N.\n");
printf(" --txd=N: set the number of descriptors in TX rings to N.\n");
printf(" --burst=N: set the number of packets per burst to N.\n");
printf(" --mbcache=N: set the cache of mbuf memory pool to N.\n");
@@ -596,7 +600,9 @@ launch_args_parse(int argc, char** argv)
{ "rss-ip", 0, 0, 0 },
{ "rss-udp", 0, 0, 0 },
{ "rxq", 1, 0, 0 },
+ { "rxq-setup", 1, 0, 0 },
{ "txq", 1, 0, 0 },
+ { "txq-setup", 1, 0, 0 },
{ "rxd", 1, 0, 0 },
{ "txd", 1, 0, 0 },
{ "burst", 1, 0, 0 },
@@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
" >= 0 && <= %u\n", n,
get_allowed_max_nb_rxq(&pid));
}
+ if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
+ n = atoi(optarg);
+ if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
+ nb_rxq_setup = (queueid_t) n;
+ else
+ rte_exit(EXIT_FAILURE, "rxq-setup %d invalid - must be"
+ " >= 0 && <= %u\n", n,
+ get_allowed_max_nb_rxq(&pid));
+ }
if (!strcmp(lgopts[opt_idx].name, "txq")) {
n = atoi(optarg);
if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
@@ -942,6 +957,15 @@ launch_args_parse(int argc, char** argv)
" >= 0 && <= %u\n", n,
get_allowed_max_nb_txq(&pid));
}
+ if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
+ n = atoi(optarg);
+ if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
+ nb_txq_setup = (queueid_t) n;
+ else
+ rte_exit(EXIT_FAILURE, "txq-setup %d invalid - must be"
+ " >= 0 && <= %u\n", n,
+ get_allowed_max_nb_txq(&pid));
+ }
if (!nb_rxq && !nb_txq) {
rte_exit(EXIT_FAILURE, "Either rx or tx queues should "
"be non-zero\n");
@@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
/* Set offload configuration from command line parameters. */
rx_mode.offloads = rx_offloads;
tx_mode.offloads = tx_offloads;
+
+ if (nb_rxq_setup > nb_rxq)
+ nb_rxq_setup = nb_rxq;
+ if (nb_txq_setup > nb_txq)
+ nb_txq_setup = nb_txq;
}
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 46dc22c94..790e7359c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
*/
queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
queueid_t nb_txq = 1; /**< Number of TX queues per port. */
+queueid_t nb_rxq_setup = MAX_QUEUE_ID;
+/**< Number of RX queues per port to set up before dev_start. */
+queueid_t nb_txq_setup = MAX_QUEUE_ID;
+/**< Number of TX queues per port to set up before dev_start. */
/*
* Configurable number of RX/TX ring descriptors.
@@ -1594,7 +1598,7 @@ start_port(portid_t pid)
/* Apply Tx offloads configuration */
port->tx_conf.offloads = port->dev_conf.txmode.offloads;
/* setup tx queues */
- for (qi = 0; qi < nb_txq; qi++) {
+ for (qi = 0; qi < nb_txq_setup; qi++) {
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,
@@ -1622,7 +1626,7 @@ start_port(portid_t pid)
/* Apply Rx offloads configuration */
port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
/* setup rx queues */
- for (qi = 0; qi < nb_rxq; qi++) {
+ for (qi = 0; qi < nb_rxq_setup; qi++) {
if ((numa_support) &&
(rxring_numa[pi] != NUMA_NO_CONFIG)) {
struct rte_mempool * mp =
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 153abea05..1a423eb8c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -373,6 +373,8 @@ extern uint64_t rss_hf;
extern queueid_t nb_rxq;
extern queueid_t nb_txq;
+extern queueid_t nb_rxq_setup;
+extern queueid_t nb_txq_setup;
extern uint16_t nb_rxd;
extern uint16_t nb_txd;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 1fd53958a..63dbec407 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -354,6 +354,12 @@ The commandline options are:
Set the number of RX queues per port to N, where 1 <= N <= 65535.
The default value is 1.
+* ``--rxq-setup=N``
+
+ Set the number of RX queues that will be set up before the device is
+ started, where 0 <= N <= 65535. The default value is rxq; if the number
+ is larger than rxq, it will be set to rxq automatically.
+
* ``--rxd=N``
Set the number of descriptors in the RX rings to N, where N > 0.
@@ -364,6 +370,12 @@ The commandline options are:
Set the number of TX queues per port to N, where 1 <= N <= 65535.
The default value is 1.
+* ``--txq-setup=N``
+
+ Set the number of TX queues that will be set up before the device is
+ started, where 0 <= N <= 65535. The default value is txq; if the number
+ is larger than txq, it will be set to txq automatically.
+
* ``--txd=N``
Set the number of descriptors in the TX rings to N, where N > 0.
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 1/4] ether: support " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for " Qi Zhang
@ 2018-03-02 4:13 ` Qi Zhang
2018-03-14 17:36 ` Ananyev, Konstantin
2018-03-14 17:41 ` Ananyev, Konstantin
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred " Qi Zhang
3 siblings, 2 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-02 4:13 UTC (permalink / raw)
To: thomas; +Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
rte_eth_[rx|tx]_queue_setup will be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
app/test-pmd/cmdline.c | 136 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 143 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b4522f46a..b725f644d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size)\n"
+ " setup a not started queue or re-setup a started queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,138 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ if (numa_support &&
+ (rxring_numa[res->port_id] != NUMA_NO_CONFIG)) {
+ mp = mbuf_pool_find(rxring_numa[res->port_id]);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ rxring_numa[res->port_id],
+ &(port->rx_conf),
+ mp);
+ } else {
+ mp = mbuf_pool_find(port->socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue:"
+ "No mempool allocation"
+ " on the socket %d\n",
+ port->socket_id);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ port->socket_id,
+ &(port->rx_conf),
+ mp);
+ }
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ if ((numa_support) &&
+ (txring_numa[res->port_id] != NUMA_NO_CONFIG))
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ txring_numa[res->port_id],
+ &(port->tx_conf));
+ else
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ port->socket_id,
+ &(port->tx_conf));
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16407,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..74269cb03 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Set up a queue that is not started, or re-set up a queue that is already started::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
` (2 preceding siblings ...)
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for " Qi Zhang
@ 2018-03-02 4:13 ` Qi Zhang
2018-03-14 12:35 ` Ananyev, Konstantin
3 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-02 4:13 UTC (permalink / raw)
To: thomas; +Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Expose the deferred queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
the device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 6 ++++
drivers/net/i40e/i40e_rxtx.c | 62 ++++++++++++++++++++++++++++++++++++++++--
2 files changed, 66 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 06b0f03a1..843a0c42a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->deferred_queue_config_capa =
+ DEV_DEFERRED_RX_QUEUE_SETUP |
+ DEV_DEFERRED_TX_QUEUE_SETUP |
+ DEV_DEFERRED_RX_QUEUE_RELEASE |
+ DEV_DEFERRED_TX_QUEUE_RELEASE;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..e5f532cf7 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t len, i;
uint16_t reg_idx, base, bsf, tc_mapping;
int q_offset, use_def_burst_func = 1;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_rx_queue_init(rxq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return ret;
+ }
+ if (ad->rx_vec_allowed)
+ i40e_rxq_vec_setup(rxq);
+ if (!rxq->rx_deferred_start) {
+ ret = i40e_dev_rx_queue_start(dev, queue_idx);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to start RX queue");
+ return ret;
+ }
+ }
+ }
+
return 0;
}
@@ -1848,13 +1868,21 @@ void
i40e_dev_rx_queue_release(void *rxq)
{
struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq;
+ struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
return;
}
- i40e_rx_queue_release_mbufs(q);
+ if (dev->data->dev_started) {
+ if (dev->data->rx_queue_state[q->queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED)
+ i40e_dev_rx_queue_stop(dev, q->queue_id);
+ } else {
+ i40e_rx_queue_release_mbufs(q);
+ }
+
rte_free(q->sw_ring);
rte_free(q);
}
@@ -1980,6 +2008,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
struct i40e_vf *vf = NULL;
@@ -1989,6 +2019,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t tx_rs_thresh, tx_free_thresh;
uint16_t reg_idx, i, base, bsf, tc_mapping;
int q_offset;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -2162,6 +2193,25 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_tx_queue_init(txq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return ret;
+ }
+ if (ad->tx_vec_allowed)
+ i40e_txq_vec_setup(txq);
+ if (!txq->tx_deferred_start) {
+ ret = i40e_dev_tx_queue_start(dev, queue_idx);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to start TX queue");
+ return ret;
+ }
+ }
+ }
+
return 0;
}
@@ -2169,13 +2219,21 @@ void
i40e_dev_tx_queue_release(void *txq)
{
struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
+ struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
if (!q) {
PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
return;
}
- i40e_tx_queue_release_mbufs(q);
+ if (dev->data->dev_started) {
+ if (dev->data->tx_queue_state[q->queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED)
+ i40e_dev_tx_queue_stop(dev, q->queue_id);
+ } else {
+ i40e_tx_queue_release_mbufs(q);
+ }
+
rte_free(q->sw_ring);
rte_free(q);
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 1/4] ether: support " Qi Zhang
@ 2018-03-14 12:31 ` Ananyev, Konstantin
2018-03-15 3:13 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-14 12:31 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
Hi Qi,
>
> The patch lets the ethdev driver expose a capability flag through
> rte_eth_dev_info_get when it supports deferred queue configuration;
> based on the flag, rte_eth_[rx|tx]_queue_setup can decide whether to
> continue setting up the queue or to fail when the device is already
> started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> doc/guides/nics/features.rst | 8 ++++++++
> lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> lib/librte_ether/rte_ethdev.h | 11 +++++++++++
> 3 files changed, 37 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 1b4fb979f..36ad21a1f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -892,7 +892,15 @@ Documentation describes performance values.
>
> See ``dpdk.org/doc/perf/*``.
>
> +.. _nic_features_queue_deferred_setup_capabilities:
>
> +Queue deferred setup capabilities
> +---------------------------------
> +
> +Supports queue setup / release after device started.
> +
> +* **[provides] rte_eth_dev_info**:
> ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFERRED_TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELE
> ASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
>
> .. _nic_features_other:
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index a6ce2a5ba..6c906c4df 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
>
> @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.deferred_queue_config_capa &
> + DEV_DEFERRED_RX_QUEUE_SETUP))
> + return -EINVAL;
> +
I think now you have to check here that the queue is stopped.
Otherwise you might attempt to reconfigure a running queue.
> rxq = dev->data->rx_queues;
> if (rxq[rx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> -ENOTSUP);
I don't think it is *that* straightforward.
rx_queue_setup() parameters can imply a different rx function (and related device settings)
than the ones already set up by a previous queue_setup()/dev_start.
So I think you need to do one of 2 things:
1. rework ethdev layer to introduce a separate rx function (and related settings) for each queue.
2. at rx_queue_setup() if it is invoked after dev_start - check that given queue settings wouldn't
contradict with current device settings (rx function, etc.).
If they do - return an error.
From my perspective - 1) is a better choice though it requires more work, and possibly ABI breakage.
I did some work in that direction as RFC:
http://dpdk.org/dev/patchwork/patch/31866/
2) might also be possible, but looks a bit clumsy as rx_queue_setup() might now fail even with
valid parameters - it all depends on previous queue configurations.
Same story applies for TX.
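For illustration, 2) could look roughly like the sketch below in the PMD
rx_queue_setup path (the helper name is hypothetical):
    if (dev->data->dev_started) {
        /* refuse a configuration that would need a different rx burst
         * function than the one already selected at dev_start
         */
        if (!i40e_rx_conf_compatible(dev, queue_idx, nb_desc, rx_conf))
            return -EINVAL;
    }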
> + if (dev->data->dev_started &&
> + !(dev_info.deferred_queue_config_capa &
> + DEV_DEFERRED_RX_QUEUE_RELEASE))
> + return -EINVAL;
> (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> rxq[rx_queue_id] = NULL;
> }
> @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
>
> @@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.deferred_queue_config_capa &
> + DEV_DEFERRED_TX_QUEUE_SETUP))
> + return -EINVAL;
> +
> txq = dev->data->tx_queues;
> if (txq[tx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> -ENOTSUP);
> + if (dev->data->dev_started &&
> + !(dev_info.deferred_queue_config_capa &
> + DEV_DEFERRED_TX_QUEUE_RELEASE))
> + return -EINVAL;
> (*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
> txq[tx_queue_id] = NULL;
> }
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 036153306..410e58c50 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -981,6 +981,15 @@ struct rte_eth_conf {
> */
> #define DEV_TX_OFFLOAD_SECURITY 0x00020000
>
> +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001
> +/**< Deferred setup rx queue */
> +#define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002
> +/**< Deferred setup tx queue */
> +#define DEV_DEFERRED_RX_QUEUE_RELEASE 0x00000004
> +/**< Deferred release rx queue */
> +#define DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008
> +/**< Deferred release tx queue */
> +
I don't think we need flags for both setup and release.
If runtime setup is supported - surely dynamic release should be supported too.
Also probably RUNTIME_RX_QUEUE_SETUP sounds a bit better.
Konstantin
> /*
> * If new Tx offload capabilities are defined, they also must be
> * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> @@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
> /** Configured number of rx/tx queues */
> uint16_t nb_rx_queues; /**< Number of RX queues. */
> uint16_t nb_tx_queues; /**< Number of TX queues. */
> + uint64_t deferred_queue_config_capa;
> + /**< queues can be setup/release after dev_start (DEV_DEFERRED_). */
> };
>
> /**
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred " Qi Zhang
@ 2018-03-14 12:35 ` Ananyev, Konstantin
2018-03-15 3:22 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-14 12:35 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> Sent: Friday, March 2, 2018 4:13 AM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
>
> Expose the deferred queue configuration capability and enhance
> i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
> the device is already started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 6 ++++
> drivers/net/i40e/i40e_rxtx.c | 62 ++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 06b0f03a1..843a0c42a 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> DEV_TX_OFFLOAD_GRE_TNL_TSO |
> DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> + dev_info->deferred_queue_config_capa =
> + DEV_DEFERRED_RX_QUEUE_SETUP |
> + DEV_DEFERRED_TX_QUEUE_SETUP |
> + DEV_DEFERRED_RX_QUEUE_RELEASE |
> + DEV_DEFERRED_TX_QUEUE_RELEASE;
> +
> dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t);
> dev_info->reta_size = pf->hash_lut_size;
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 1217e5a61..e5f532cf7 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t len, i;
> uint16_t reg_idx, base, bsf, tc_mapping;
> int q_offset, use_def_burst_func = 1;
> + int ret = 0;
>
> if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
> vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> rxq->dcb_tc = i;
> }
>
> + if (dev->data->dev_started) {
> + ret = i40e_rx_queue_init(rxq);
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to do RX queue initialization");
> + return ret;
> + }
> + if (ad->rx_vec_allowed)
Better to check what rx function is installed right now.
> + i40e_rxq_vec_setup(rxq);
> + if (!rxq->rx_deferred_start) {
> + ret = i40e_dev_rx_queue_start(dev, queue_idx);
I don't think it is a good idea to start/stop queue inside queue_setup/queue_release.
There is a dedicated API (queue_start/queue_stop) to do this.
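For example, a hedged sketch of driving a runtime-configured queue from the application side instead (existing ethdev calls; the port/queue values are only for illustration):

/* runtime setup of queue 1 on an already started port, started explicitly */
ret = rte_eth_rx_queue_setup(port_id, 1, nb_rxd, socket_id, &rx_conf, mp);
if (ret == 0)
	ret = rte_eth_dev_rx_queue_start(port_id, 1);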
Konstantin
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to start RX queue");
> + return ret;
> + }
> + }
> + }
> +
> return 0;
> }
>
> @@ -1848,13 +1868,21 @@ void
> i40e_dev_rx_queue_release(void *rxq)
> {
> struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq;
> + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
>
> if (!q) {
> PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
> return;
> }
>
> - i40e_rx_queue_release_mbufs(q);
> + if (dev->data->dev_started) {
> + if (dev->data->rx_queue_state[q->queue_id] ==
> + RTE_ETH_QUEUE_STATE_STARTED)
> + i40e_dev_rx_queue_stop(dev, q->queue_id);
> + } else {
> + i40e_rx_queue_release_mbufs(q);
> + }
> +
> rte_free(q->sw_ring);
> rte_free(q);
> }
> @@ -1980,6 +2008,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> const struct rte_eth_txconf *tx_conf)
> {
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct i40e_adapter *ad =
> + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> struct i40e_vsi *vsi;
> struct i40e_pf *pf = NULL;
> struct i40e_vf *vf = NULL;
> @@ -1989,6 +2019,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> uint16_t tx_rs_thresh, tx_free_thresh;
> uint16_t reg_idx, i, base, bsf, tc_mapping;
> int q_offset;
> + int ret = 0;
>
> if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
> vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> @@ -2162,6 +2193,25 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> txq->dcb_tc = i;
> }
>
> + if (dev->data->dev_started) {
> + ret = i40e_tx_queue_init(txq);
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to do TX queue initialization");
> + return ret;
> + }
> + if (ad->tx_vec_allowed)
> + i40e_txq_vec_setup(txq);
> + if (!txq->tx_deferred_start) {
> + ret = i40e_dev_tx_queue_start(dev, queue_idx);
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to start TX queue");
> + return ret;
> + }
> + }
> + }
> +
> return 0;
> }
>
> @@ -2169,13 +2219,21 @@ void
> i40e_dev_tx_queue_release(void *txq)
> {
> struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
> + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
>
> if (!q) {
> PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
> return;
> }
>
> - i40e_tx_queue_release_mbufs(q);
> + if (dev->data->dev_started) {
> + if (dev->data->tx_queue_state[q->queue_id] ==
> + RTE_ETH_QUEUE_STATE_STARTED)
> + i40e_dev_tx_queue_stop(dev, q->queue_id);
> + } else {
> + i40e_tx_queue_release_mbufs(q);
> + }
> +
> rte_free(q->sw_ring);
> rte_free(q);
> }
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for " Qi Zhang
@ 2018-03-14 17:36 ` Ananyev, Konstantin
2018-03-14 17:41 ` Ananyev, Konstantin
1 sibling, 0 replies; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-14 17:36 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> Sent: Friday, March 2, 2018 4:13 AM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for queue setup
>
> Add new command to setup queue:
> queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
>
> rte_eth_[rx|tx]_queue_setup will be called corresponsively
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> app/test-pmd/cmdline.c | 136 ++++++++++++++++++++++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
> 2 files changed, 143 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index b4522f46a..b725f644d 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
> "port tm hierarchy commit (port_id) (clean_on_fail)\n"
> " Commit tm hierarchy.\n\n"
>
> + "queue setup (rx|tx) (port_id) (queue_idx) (ring_size)\n"
> + " setup a not started queue or re-setup a started queue.\n\n"
> +
> , list_pkt_forwarding_modes()
> );
> }
> @@ -16030,6 +16033,138 @@ cmdline_parse_inst_t cmd_load_from_file = {
> },
> };
>
> +/* Queue Setup */
> +
> +/* Common result structure for queue setup */
> +struct cmd_queue_setup_result {
> + cmdline_fixed_string_t queue;
> + cmdline_fixed_string_t setup;
> + cmdline_fixed_string_t rxtx;
> + portid_t port_id;
> + uint16_t queue_idx;
> + uint16_t ring_size;
> +};
> +
> +/* Common CLI fields for queue setup */
> +cmdline_parse_token_string_t cmd_queue_setup_queue =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
> +cmdline_parse_token_string_t cmd_queue_setup_setup =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
> +cmdline_parse_token_string_t cmd_queue_setup_rxtx =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
> +cmdline_parse_token_num_t cmd_queue_setup_port_id =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
> +cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
> +cmdline_parse_token_num_t cmd_queue_setup_ring_size =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
> +
> +static void
> +cmd_queue_setup_parsed(
> + void *parsed_result,
> + __attribute__((unused)) struct cmdline *cl,
> + __attribute__((unused)) void *data)
> +{
> + struct cmd_queue_setup_result *res = parsed_result;
> + struct rte_port *port;
> + struct rte_mempool *mp;
> + uint8_t rx = 1;
> + int ret;
> +
> + if (port_id_is_invalid(res->port_id, ENABLED_WARN))
> + return;
> +
> + if (!strcmp(res->rxtx, "tx"))
> + rx = 0;
> +
> + if (rx && res->ring_size <= rx_free_thresh) {
> + printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
> + rx_free_thresh);
> + return;
> + }
> +
> + if (rx && res->queue_idx >= nb_rxq) {
> + printf("Invalid rx queue index, must < nb_rxq: %d\n",
> + nb_rxq);
> + return;
> + }
> +
> + if (!rx && res->queue_idx >= nb_txq) {
> + printf("Invalid tx queue index, must < nb_txq: %d\n",
> + nb_txq);
> + return;
> + }
> +
> + port = &ports[res->port_id];
> + if (rx) {
> + if (numa_support &&
> + (rxring_numa[res->port_id] != NUMA_NO_CONFIG)) {
> + mp = mbuf_pool_find(rxring_numa[res->port_id]);
> + if (mp == NULL) {
> + printf("Failed to setup RX queue: "
> + "No mempool allocation"
> + " on the socket %d\n",
> + rxring_numa[res->port_id]);
> + return;
> + }
> + ret = rte_eth_rx_queue_setup(res->port_id,
> + res->queue_idx,
> + res->ring_size,
> + rxring_numa[res->port_id],
> + &(port->rx_conf),
> + mp);
You can probably reorder that code a bit to minimize code duplication:
if (numa_support ....) {
mp = ...;
rx_conf = ...;
} else {
mp = ...;
rx_conf = ...;
}
if (mp == NULL) {....}
ret = rte_eth_rx_queue_setup(..., rx_conf, mp);
Same for TX.
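A hedged sketch of what that de-duplicated RX branch could look like (the fallback to port->socket_id is an assumption):

unsigned int socket_id;

if (numa_support && rxring_numa[res->port_id] != NUMA_NO_CONFIG)
	socket_id = rxring_numa[res->port_id];
else
	socket_id = port->socket_id;

mp = mbuf_pool_find(socket_id);
if (mp == NULL) {
	printf("Failed to setup RX queue: no mempool on socket %u\n", socket_id);
	return;
}
ret = rte_eth_rx_queue_setup(res->port_id, res->queue_idx,
			     res->ring_size, socket_id,
			     &port->rx_conf, mp);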
Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for " Qi Zhang
@ 2018-03-14 17:38 ` Ananyev, Konstantin
2018-03-15 3:58 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-14 17:38 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> Sent: Friday, March 2, 2018 4:13 AM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
>
> Add two parameters:
> rxq-setup: set the number of RX queues be setup before device started
> txq-setup: set the number of TX queues be setup before device started.
Not sure we need these new parameters at all - in the next patch
you introduce the ability to do queue_setup from the command line.
Plus we already have the ability to do queue_stop/queue_start from the command line.
I think that would be enough for testing.
Konstantin
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> app/test-pmd/parameters.c | 29 +++++++++++++++++++++++++++++
> app/test-pmd/testpmd.c | 8 ++++++--
> app/test-pmd/testpmd.h | 2 ++
> doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
> 4 files changed, 49 insertions(+), 2 deletions(-)
>
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 97d22b860..497259ee7 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -146,8 +146,12 @@ usage(char* progname)
> printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
> printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
> printf(" --rxq=N: set the number of RX queues per port to N.\n");
> + printf(" --rxq-setup=N: set the number of RX queues be setup before"
> + "device start to N.\n");
> printf(" --rxd=N: set the number of descriptors in RX rings to N.\n");
> printf(" --txq=N: set the number of TX queues per port to N.\n");
> + printf(" --txq-setup=N: set the number of TX queues be setup before"
> + "device start to N.\n");
> printf(" --txd=N: set the number of descriptors in TX rings to N.\n");
> printf(" --burst=N: set the number of packets per burst to N.\n");
> printf(" --mbcache=N: set the cache of mbuf memory pool to N.\n");
> @@ -596,7 +600,9 @@ launch_args_parse(int argc, char** argv)
> { "rss-ip", 0, 0, 0 },
> { "rss-udp", 0, 0, 0 },
> { "rxq", 1, 0, 0 },
> + { "rxq-setup", 1, 0, 0 },
> { "txq", 1, 0, 0 },
> + { "txq-setup", 1, 0, 0 },
> { "rxd", 1, 0, 0 },
> { "txd", 1, 0, 0 },
> { "burst", 1, 0, 0 },
> @@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
> " >= 0 && <= %u\n", n,
> get_allowed_max_nb_rxq(&pid));
> }
> + if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
> + n = atoi(optarg);
> + if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
> + nb_rxq_setup = (queueid_t) n;
> + else
> + rte_exit(EXIT_FAILURE, "rxq-setup %d invalid - must be"
> + " >= 0 && <= %u\n", n,
> + get_allowed_max_nb_rxq(&pid));
> + }
> if (!strcmp(lgopts[opt_idx].name, "txq")) {
> n = atoi(optarg);
> if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
> @@ -942,6 +957,15 @@ launch_args_parse(int argc, char** argv)
> " >= 0 && <= %u\n", n,
> get_allowed_max_nb_txq(&pid));
> }
> + if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
> + n = atoi(optarg);
> + if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
> + nb_txq_setup = (queueid_t) n;
> + else
> + rte_exit(EXIT_FAILURE, "txq-setup %d invalid - must be"
> + " >= 0 && <= %u\n", n,
> + get_allowed_max_nb_txq(&pid));
> + }
> if (!nb_rxq && !nb_txq) {
> rte_exit(EXIT_FAILURE, "Either rx or tx queues should "
> "be non-zero\n");
> @@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
> /* Set offload configuration from command line parameters. */
> rx_mode.offloads = rx_offloads;
> tx_mode.offloads = tx_offloads;
> +
> + if (nb_rxq_setup > nb_rxq)
> + nb_rxq_setup = nb_rxq;
> + if (nb_txq_setup > nb_txq)
> + nb_txq_setup = nb_txq;
> }
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 46dc22c94..790e7359c 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
> */
> queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
> queueid_t nb_txq = 1; /**< Number of TX queues per port. */
> +queueid_t nb_rxq_setup = MAX_QUEUE_ID;
> +/**< Number of RX queues per port start when dev_start. */
> +queueid_t nb_txq_setup = MAX_QUEUE_ID;
> +/**< Number of TX queues per port start when dev_start */
>
> /*
> * Configurable number of RX/TX ring descriptors.
> @@ -1594,7 +1598,7 @@ start_port(portid_t pid)
> /* Apply Tx offloads configuration */
> port->tx_conf.offloads = port->dev_conf.txmode.offloads;
> /* setup tx queues */
> - for (qi = 0; qi < nb_txq; qi++) {
> + for (qi = 0; qi < nb_txq_setup; qi++) {
> if ((numa_support) &&
> (txring_numa[pi] != NUMA_NO_CONFIG))
> diag = rte_eth_tx_queue_setup(pi, qi,
> @@ -1622,7 +1626,7 @@ start_port(portid_t pid)
> /* Apply Rx offloads configuration */
> port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
> /* setup rx queues */
> - for (qi = 0; qi < nb_rxq; qi++) {
> + for (qi = 0; qi < nb_rxq_setup; qi++) {
> if ((numa_support) &&
> (rxring_numa[pi] != NUMA_NO_CONFIG)) {
> struct rte_mempool * mp =
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 153abea05..1a423eb8c 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -373,6 +373,8 @@ extern uint64_t rss_hf;
>
> extern queueid_t nb_rxq;
> extern queueid_t nb_txq;
> +extern queueid_t nb_rxq_setup;
> +extern queueid_t nb_txq_setup;
>
> extern uint16_t nb_rxd;
> extern uint16_t nb_txd;
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 1fd53958a..63dbec407 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -354,6 +354,12 @@ The commandline options are:
> Set the number of RX queues per port to N, where 1 <= N <= 65535.
> The default value is 1.
>
> +* ``--rxq-setup=N``
> +
> + Set the number of RX queues will be setup before device started,
> + where 0 <= N <= 65535. The default value is rxq, if the number is
> + larger than rxq, it will be set to rxq automatically.
> +
> * ``--rxd=N``
>
> Set the number of descriptors in the RX rings to N, where N > 0.
> @@ -364,6 +370,12 @@ The commandline options are:
> Set the number of TX queues per port to N, where 1 <= N <= 65535.
> The default value is 1.
>
> +* ``--txq-setup=N``
> +
> + Set the number of TX queues will be setup before device started,
> + where 0 <= N <= 65535. The default value is rxq, if the number is
> + larger than txq, it will be set to txq automatically.
> +
> * ``--txd=N``
>
> Set the number of descriptors in the TX rings to N, where N > 0.
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for queue setup
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for " Qi Zhang
2018-03-14 17:36 ` Ananyev, Konstantin
@ 2018-03-14 17:41 ` Ananyev, Konstantin
2018-03-15 3:59 ` Zhang, Qi Z
1 sibling, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-14 17:41 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
> +}
> +
> +cmdline_parse_inst_t cmd_queue_setup = {
> + .f = cmd_queue_setup_parsed,
> + .data = NULL,
> + .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>",
It would probably be good to add the ability to specify rx/tx queue offloads with this command.
Konstantin
> + .tokens = {
> + (void *)&cmd_queue_setup_queue,
> + (void *)&cmd_queue_setup_setup,
> + (void *)&cmd_queue_setup_rxtx,
> + (void *)&cmd_queue_setup_port_id,
> + (void *)&cmd_queue_setup_queue_idx,
> + (void *)&cmd_queue_setup_ring_size,
> + NULL,
> + },
> +};
> +
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-14 12:31 ` Ananyev, Konstantin
@ 2018-03-15 3:13 ` Zhang, Qi Z
2018-03-15 13:16 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 3:13 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
Hi Konstantin:
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, March 14, 2018 8:32 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
>
> Hi Qi,
>
> >
> > The patch let etherdev driver expose the capability flag through
> > rte_eth_dev_info_get when it support deferred queue configuraiton,
> > then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> > continue to setup the queue or just return fail when device already
> > started.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > doc/guides/nics/features.rst | 8 ++++++++
> > lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> > lib/librte_ether/rte_ethdev.h | 11 +++++++++++
> > 3 files changed, 37 insertions(+), 12 deletions(-)
> >
> > diff --git a/doc/guides/nics/features.rst
> > b/doc/guides/nics/features.rst index 1b4fb979f..36ad21a1f 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -892,7 +892,15 @@ Documentation describes performance values.
> >
> > See ``dpdk.org/doc/perf/*``.
> >
> > +.. _nic_features_queue_deferred_setup_capabilities:
> >
> > +Queue deferred setup capabilities
> > +---------------------------------
> > +
> > +Supports queue setup / release after device started.
> > +
> > +* **[provides] rte_eth_dev_info**:
> >
> ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFE
> RRED_
> > TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELE
> > ASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> > +* **[related] API**: ``rte_eth_dev_info_get()``.
> >
> > .. _nic_features_other:
> >
> > diff --git a/lib/librte_ether/rte_ethdev.c
> > b/lib/librte_ether/rte_ethdev.c index a6ce2a5ba..6c906c4df 100644
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> > @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> uint16_t rx_queue_id,
> > return -EINVAL;
> > }
> >
> > - if (dev->data->dev_started) {
> > - RTE_PMD_DEBUG_TRACE(
> > - "port %d must be stopped to allow configuration\n", port_id);
> > - return -EBUSY;
> > - }
> > -
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> -ENOTSUP);
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> -ENOTSUP);
> >
> > @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> uint16_t rx_queue_id,
> > return -EINVAL;
> > }
> >
> > + if (dev->data->dev_started &&
> > + !(dev_info.deferred_queue_config_capa &
> > + DEV_DEFERRED_RX_QUEUE_SETUP))
> > + return -EINVAL;
> > +
>
> I think now you have to check here that the queue is stopped.
> Otherwise you might attempt to reconfigure running queue.
I'm not sure it's necessary to make the application use a different API sequence for a deferred configure and a deferred re-configure.
Can we just call dev_ops->rx_queue_stop before rx_queue_release here?
>
>
> > rxq = dev->data->rx_queues;
> > if (rxq[rx_queue_id]) {
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > -ENOTSUP);
>
> I don't think it is *that* straightforward.
> rx_queue_setup() parameters can imply different rx function (and related dev
> icesettings) that are already setuped by previous queue_setup()/dev_start.
> So I think you need to do one of 2 things:
> 1. rework ethdev layer to introduce a separate rx function (and related
> settings) for each queue.
> 2. at rx_queue_setup() if it is invoked after dev_start - check that given
> queue settings wouldn't contradict with current device settings (rx function,
> etc.).
> If they do - return an error.
Yes, I think what we have here is option 2: dev_ops->rx_queue_setup will return failure if the new settings conflict with the previous ones.
I'm also thinking about option 1; the idea is to move the per-queue rx/tx function selection into the driver layer, so it will not break the existing API.
1. the driver can expose a capability like per_queue_rx or per_queue_tx
2. the application can enable this capability through dev_configure with rte_eth_conf
3. if per_queue_rx is not enabled, nothing changes, so we are back at option 2
4. if per_queue_rx is enabled, the driver sets rx_pkt_burst to a hook function which redirects to a function pointer in a per-queue rx function table (I guess performance is impacted somehow, but that is the cost of wanting different offloads per queue) - see the sketch below
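A minimal sketch of such a hook, assuming a hypothetical per-queue wrapper (none of these names exist in the posted patches):

#include <stdint.h>
#include <rte_mbuf.h>

typedef uint16_t (*rx_burst_t)(void *rxq, struct rte_mbuf **pkts, uint16_t n);

/* hypothetical wrapper installed as the per-queue rx object */
struct per_queue_rxq {
	rx_burst_t rx_func; /* rx function chosen for this particular queue */
	void *real_rxq;     /* the driver's real queue structure */
};

/* hook registered as dev->rx_pkt_burst when the per-queue capability is on;
 * it simply redirects to the function selected for this queue */
static uint16_t
hook_rx_pkt_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	struct per_queue_rxq *q = rxq;

	return q->rx_func(q->real_rxq, rx_pkts, nb_pkts);
}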
>
> From my perspective - 1) is a better choice though it required more work,
> and possibly ABI breakage.
> I did some work in that direction as RFC:
> http://dpdk.org/dev/patchwork/patch/31866/
I will learn this, thanks for the heads up.
>
> 2) might be also possible, but looks a bit clumsy as rx_queue_setup() might
> now fail even with valid parameters - all depends on previous queue
> configurations.
>
> Same story applies for TX.
>
>
> > + if (dev->data->dev_started &&
> > + !(dev_info.deferred_queue_config_capa &
> > + DEV_DEFERRED_RX_QUEUE_RELEASE))
> > + return -EINVAL;
> > (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > rxq[rx_queue_id] = NULL;
> > }
> > @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> uint16_t tx_queue_id,
> > return -EINVAL;
> > }
> >
> > - if (dev->data->dev_started) {
> > - RTE_PMD_DEBUG_TRACE(
> > - "port %d must be stopped to allow configuration\n", port_id);
> > - return -EBUSY;
> > - }
> > -
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> -ENOTSUP);
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup,
> -ENOTSUP);
> >
> > @@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> uint16_t tx_queue_id,
> > return -EINVAL;
> > }
> >
> > + if (dev->data->dev_started &&
> > + !(dev_info.deferred_queue_config_capa &
> > + DEV_DEFERRED_TX_QUEUE_SETUP))
> > + return -EINVAL;
> > +
> > txq = dev->data->tx_queues;
> > if (txq[tx_queue_id]) {
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> > -ENOTSUP);
> > + if (dev->data->dev_started &&
> > + !(dev_info.deferred_queue_config_capa &
> > + DEV_DEFERRED_TX_QUEUE_RELEASE))
> > + return -EINVAL;
> > (*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
> > txq[tx_queue_id] = NULL;
> > }
> > diff --git a/lib/librte_ether/rte_ethdev.h
> > b/lib/librte_ether/rte_ethdev.h index 036153306..410e58c50 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -981,6 +981,15 @@ struct rte_eth_conf {
> > */
> > #define DEV_TX_OFFLOAD_SECURITY 0x00020000
> >
> > +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001 /**< Deferred
> setup rx
> > +queue */ #define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002 /**<
> Deferred
> > +setup tx queue */ #define DEV_DEFERRED_RX_QUEUE_RELEASE
> 0x00000004
> > +/**< Deferred release rx queue */ #define
> > +DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008 /**< Deferred release
> tx
> > +queue */
> > +
>
> I don't think we do need flags for both setup a and release.
> If runtime setup is supported - surely dynamic release should be supported
> too.
> Also probably RUNTIME_RX_QUEUE_SETUP sounds a bit better.
Agree
Thanks
Qi
>
> Konstantin
>
> > /*
> > * If new Tx offload capabilities are defined, they also must be
> > * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> > @@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
> > /** Configured number of rx/tx queues */
> > uint16_t nb_rx_queues; /**< Number of RX queues. */
> > uint16_t nb_tx_queues; /**< Number of TX queues. */
> > + uint64_t deferred_queue_config_capa;
> > + /**< queues can be setup/release after dev_start (DEV_DEFERRED_). */
> > };
> >
> > /**
> > --
> > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-14 12:35 ` Ananyev, Konstantin
@ 2018-03-15 3:22 ` Zhang, Qi Z
2018-03-15 3:50 ` Zhang, Qi Z
2018-03-15 13:22 ` Ananyev, Konstantin
0 siblings, 2 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 3:22 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, March 14, 2018 8:36 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > Sent: Friday, March 2, 2018 4:13 AM
> > To: thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> Qi
> > Z <qi.z.zhang@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> > Expose the deferred queue configuration capability and enhance
> > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
> > device already started.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > drivers/net/i40e/i40e_rxtx.c | 62
> ++++++++++++++++++++++++++++++++++++++++--
> > 2 files changed, 66 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a 100644
> > --- a/drivers/net/i40e/i40e_ethdev.c
> > +++ b/drivers/net/i40e/i40e_ethdev.c
> > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > + dev_info->deferred_queue_config_capa =
> > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > +
> > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > sizeof(uint32_t);
> > dev_info->reta_size = pf->hash_lut_size; diff --git
> > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> > 1217e5a61..e5f532cf7 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> *dev,
> > uint16_t len, i;
> > uint16_t reg_idx, base, bsf, tc_mapping;
> > int q_offset, use_def_burst_func = 1;
> > + int ret = 0;
> >
> > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> I40E_MAC_X722_VF) {
> > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> *dev,
> > rxq->dcb_tc = i;
> > }
> >
> > + if (dev->data->dev_started) {
> > + ret = i40e_rx_queue_init(rxq);
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to do RX queue initialization");
> > + return ret;
> > + }
> > + if (ad->rx_vec_allowed)
>
> Better to check what rx function is installed right now.
Yes, it should be fixed; we need to return failure if there is any conflict.
>
> > + i40e_rxq_vec_setup(rxq);
> > + if (!rxq->rx_deferred_start) {
> > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
>
> I don't think it is a good idea to start/stop queue inside
> queue_setup/queue_release.
> There is special API (queue_start/queue_stop) to do this.
The idea is that if the device is already started, the queue is supposed to be started automatically after queue_setup.
The deferred_start flag can be used if the application doesn't want this.
But maybe it's better to call dev_ops->rx_queue_stop in the etherdev layer (same thing for queue_stop in the previous patch); a sketch of that follows below.
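A hedged sketch of how the etherdev layer could stop a still-running queue before invoking the driver's release callback (existing ethdev symbols; the placement inside rte_eth_rx_queue_setup is an assumption):

/* before releasing an already configured queue during runtime re-setup */
if (dev->data->dev_started &&
    dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
	ret = rte_eth_dev_rx_queue_stop(port_id, rx_queue_id);
	if (ret != 0)
		return ret;
}
(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);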
Thanks
Qi
> Konstantin
>
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to start RX queue");
> > + return ret;
> > + }
> > + }
> > + }
> > +
> > return 0;
> > }
> >
> > @@ -1848,13 +1868,21 @@ void
> > i40e_dev_rx_queue_release(void *rxq)
> > {
> > struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq;
> > + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> >
> > if (!q) {
> > PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
> > return;
> > }
> >
> > - i40e_rx_queue_release_mbufs(q);
> > + if (dev->data->dev_started) {
> > + if (dev->data->rx_queue_state[q->queue_id] ==
> > + RTE_ETH_QUEUE_STATE_STARTED)
> > + i40e_dev_rx_queue_stop(dev, q->queue_id);
> > + } else {
> > + i40e_rx_queue_release_mbufs(q);
> > + }
> > +
> > rte_free(q->sw_ring);
> > rte_free(q);
> > }
> > @@ -1980,6 +2008,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > const struct rte_eth_txconf *tx_conf) {
> > struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > + struct i40e_adapter *ad =
> > + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > struct i40e_vsi *vsi;
> > struct i40e_pf *pf = NULL;
> > struct i40e_vf *vf = NULL;
> > @@ -1989,6 +2019,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > uint16_t tx_rs_thresh, tx_free_thresh;
> > uint16_t reg_idx, i, base, bsf, tc_mapping;
> > int q_offset;
> > + int ret = 0;
> >
> > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> I40E_MAC_X722_VF) {
> > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > @@ -2162,6 +2193,25 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > txq->dcb_tc = i;
> > }
> >
> > + if (dev->data->dev_started) {
> > + ret = i40e_tx_queue_init(txq);
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to do TX queue initialization");
> > + return ret;
> > + }
> > + if (ad->tx_vec_allowed)
> > + i40e_txq_vec_setup(txq);
> > + if (!txq->tx_deferred_start) {
> > + ret = i40e_dev_tx_queue_start(dev, queue_idx);
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to start TX queue");
> > + return ret;
> > + }
> > + }
> > + }
> > +
> > return 0;
> > }
> >
> > @@ -2169,13 +2219,21 @@ void
> > i40e_dev_tx_queue_release(void *txq)
> > {
> > struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
> > + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> >
> > if (!q) {
> > PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
> > return;
> > }
> >
> > - i40e_tx_queue_release_mbufs(q);
> > + if (dev->data->dev_started) {
> > + if (dev->data->tx_queue_state[q->queue_id] ==
> > + RTE_ETH_QUEUE_STATE_STARTED)
> > + i40e_dev_tx_queue_stop(dev, q->queue_id);
> > + } else {
> > + i40e_tx_queue_release_mbufs(q);
> > + }
> > +
> > rte_free(q->sw_ring);
> > rte_free(q);
> > }
> > --
> > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-15 3:22 ` Zhang, Qi Z
@ 2018-03-15 3:50 ` Zhang, Qi Z
2018-03-15 13:22 ` Ananyev, Konstantin
1 sibling, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 3:50 UTC (permalink / raw)
To: Zhang, Qi Z, Ananyev, Konstantin, thomas
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zhang, Qi Z
> Sent: Thursday, March 15, 2018 11:22 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, March 14, 2018 8:36 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> Qi
> > Z <qi.z.zhang@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > Sent: Friday, March 2, 2018 4:13 AM
> > > To: thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > Qi
> > > Z <qi.z.zhang@intel.com>
> > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > setup
> > >
> > > Expose the deferred queue configuration capability and enhance
> > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
> > > device already started.
> > >
> > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > ---
> > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > drivers/net/i40e/i40e_rxtx.c | 62
> > ++++++++++++++++++++++++++++++++++++++++--
> > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a 100644
> > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev,
> > struct rte_eth_dev_info *dev_info)
> > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > + dev_info->deferred_queue_config_capa =
> > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > +
> > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > > sizeof(uint32_t);
> > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> > > 1217e5a61..e5f532cf7 100644
> > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> > *dev,
> > > uint16_t len, i;
> > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > int q_offset, use_def_burst_func = 1;
> > > + int ret = 0;
> > >
> > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > I40E_MAC_X722_VF) {
> > > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> rte_eth_dev
> > *dev,
> > > rxq->dcb_tc = i;
> > > }
> > >
> > > + if (dev->data->dev_started) {
> > > + ret = i40e_rx_queue_init(rxq);
> > > + if (ret != I40E_SUCCESS) {
> > > + PMD_DRV_LOG(ERR,
> > > + "Failed to do RX queue initialization");
> > > + return ret;
> > > + }
> > > + if (ad->rx_vec_allowed)
> >
> > Better to check what rx function is installed right now.
> Yes, it should be fixed, need to return fail if any conflict
Sorry, for i40e I think the rx function selection is not impacted by the queue_setup parameters, only by dev_configure's, so it's not necessary to check the installed function, and there will be no conflict here.
> >
> > > + i40e_rxq_vec_setup(rxq);
> > > + if (!rxq->rx_deferred_start) {
> > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> >
> > I don't think it is a good idea to start/stop queue inside
> > queue_setup/queue_release.
> > There is special API (queue_start/queue_stop) to do this.
>
> The idea is if dev already started, the queue is supposed to be started
> automatically after queue_setup.
> The defered_start flag can be used if application don't want this.
> But maybe it's better to call dev_ops->rx_queue_stop in etherdev layer.
> (same thing for queue_stop in previous patch)
>
> Thanks
> Qi
>
> > Konstantin
> >
> > > + if (ret != I40E_SUCCESS) {
> > > + PMD_DRV_LOG(ERR,
> > > + "Failed to start RX queue");
> > > + return ret;
> > > + }
> > > + }
> > > + }
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -1848,13 +1868,21 @@ void
> > > i40e_dev_rx_queue_release(void *rxq) {
> > > struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq;
> > > + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> > >
> > > if (!q) {
> > > PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL");
> > > return;
> > > }
> > >
> > > - i40e_rx_queue_release_mbufs(q);
> > > + if (dev->data->dev_started) {
> > > + if (dev->data->rx_queue_state[q->queue_id] ==
> > > + RTE_ETH_QUEUE_STATE_STARTED)
> > > + i40e_dev_rx_queue_stop(dev, q->queue_id);
> > > + } else {
> > > + i40e_rx_queue_release_mbufs(q);
> > > + }
> > > +
> > > rte_free(q->sw_ring);
> > > rte_free(q);
> > > }
> > > @@ -1980,6 +2008,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> > *dev,
> > > const struct rte_eth_txconf *tx_conf) {
> > > struct i40e_hw *hw =
> > I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > > + struct i40e_adapter *ad =
> > > + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > > struct i40e_vsi *vsi;
> > > struct i40e_pf *pf = NULL;
> > > struct i40e_vf *vf = NULL;
> > > @@ -1989,6 +2019,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> > *dev,
> > > uint16_t tx_rs_thresh, tx_free_thresh;
> > > uint16_t reg_idx, i, base, bsf, tc_mapping;
> > > int q_offset;
> > > + int ret = 0;
> > >
> > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > I40E_MAC_X722_VF) {
> > > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > @@ -2162,6 +2193,25 @@ i40e_dev_tx_queue_setup(struct
> rte_eth_dev
> > *dev,
> > > txq->dcb_tc = i;
> > > }
> > >
> > > + if (dev->data->dev_started) {
> > > + ret = i40e_tx_queue_init(txq);
> > > + if (ret != I40E_SUCCESS) {
> > > + PMD_DRV_LOG(ERR,
> > > + "Failed to do TX queue initialization");
> > > + return ret;
> > > + }
> > > + if (ad->tx_vec_allowed)
> > > + i40e_txq_vec_setup(txq);
> > > + if (!txq->tx_deferred_start) {
> > > + ret = i40e_dev_tx_queue_start(dev, queue_idx);
> > > + if (ret != I40E_SUCCESS) {
> > > + PMD_DRV_LOG(ERR,
> > > + "Failed to start TX queue");
> > > + return ret;
> > > + }
> > > + }
> > > + }
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -2169,13 +2219,21 @@ void
> > > i40e_dev_tx_queue_release(void *txq)
> > > {
> > > struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
> > > + struct rte_eth_dev *dev = &rte_eth_devices[q->port_id];
> > >
> > > if (!q) {
> > > PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
> > > return;
> > > }
> > >
> > > - i40e_tx_queue_release_mbufs(q);
> > > + if (dev->data->dev_started) {
> > > + if (dev->data->tx_queue_state[q->queue_id] ==
> > > + RTE_ETH_QUEUE_STATE_STARTED)
> > > + i40e_dev_tx_queue_stop(dev, q->queue_id);
> > > + } else {
> > > + i40e_tx_queue_release_mbufs(q);
> > > + }
> > > +
> > > rte_free(q->sw_ring);
> > > rte_free(q);
> > > }
> > > --
> > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
2018-03-14 17:38 ` Ananyev, Konstantin
@ 2018-03-15 3:58 ` Zhang, Qi Z
2018-03-15 13:42 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 3:58 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 1:39 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> deferred queue setup
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > Sent: Friday, March 2, 2018 4:13 AM
> > To: thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> Qi
> > Z <qi.z.zhang@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> > deferred queue setup
> >
> > Add two parameters:
> > rxq-setup: set the number of RX queues be setup before device started
> > txq-setup: set the number of TX queues be setup before device started.
>
> Not sure we do need these new parameters at all - in next patch you
> introduce ability to do queue_setup from command-line.
> Plus we already have an ability to do queue_stop/queue_start from
> command-line.
> I think that would be enough for testing.
> Konstantin
Without these parameters, we can only reconfigure a queue after dev_start; they are needed to cover the case of deferring the setup of a fresh queue.
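For illustration, a hedged usage sketch with the parameters from this patch (queue counts chosen arbitrarily):

testpmd -l 0-3 -n 4 -- -i --rxq=4 --txq=4 --rxq-setup=2 --txq-setup=2

This configures 4 queues per direction but only sets up queues 0-1 before dev_start, leaving queues 2-3 to be set up at runtime with the 'queue setup' command added in the next patch.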
>
>
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > app/test-pmd/parameters.c | 29
> +++++++++++++++++++++++++++++
> > app/test-pmd/testpmd.c | 8 ++++++--
> > app/test-pmd/testpmd.h | 2 ++
> > doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
> > 4 files changed, 49 insertions(+), 2 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index 97d22b860..497259ee7 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -146,8 +146,12 @@ usage(char* progname)
> > printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
> > printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
> > printf(" --rxq=N: set the number of RX queues per port to N.\n");
> > + printf(" --rxq-setup=N: set the number of RX queues be setup before"
> > + "device start to N.\n");
> > printf(" --rxd=N: set the number of descriptors in RX rings to N.\n");
> > printf(" --txq=N: set the number of TX queues per port to N.\n");
> > + printf(" --txq-setup=N: set the number of TX queues be setup before"
> > + "device start to N.\n");
> > printf(" --txd=N: set the number of descriptors in TX rings to N.\n");
> > printf(" --burst=N: set the number of packets per burst to N.\n");
> > printf(" --mbcache=N: set the cache of mbuf memory pool to N.\n");
> > @@ -596,7 +600,9 @@ launch_args_parse(int argc, char** argv)
> > { "rss-ip", 0, 0, 0 },
> > { "rss-udp", 0, 0, 0 },
> > { "rxq", 1, 0, 0 },
> > + { "rxq-setup", 1, 0, 0 },
> > { "txq", 1, 0, 0 },
> > + { "txq-setup", 1, 0, 0 },
> > { "rxd", 1, 0, 0 },
> > { "txd", 1, 0, 0 },
> > { "burst", 1, 0, 0 },
> > @@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
> > " >= 0 && <= %u\n", n,
> > get_allowed_max_nb_rxq(&pid));
> > }
> > + if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
> > + n = atoi(optarg);
> > + if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
> > + nb_rxq_setup = (queueid_t) n;
> > + else
> > + rte_exit(EXIT_FAILURE, "rxq-setup %d invalid - must
> be"
> > + " >= 0 && <= %u\n", n,
> > + get_allowed_max_nb_rxq(&pid));
> > + }
> > if (!strcmp(lgopts[opt_idx].name, "txq")) {
> > n = atoi(optarg);
> > if (n >= 0 && check_nb_txq((queueid_t)n) == 0) @@
> -942,6 +957,15
> > @@ launch_args_parse(int argc, char** argv)
> > " >= 0 && <= %u\n", n,
> > get_allowed_max_nb_txq(&pid));
> > }
> > + if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
> > + n = atoi(optarg);
> > + if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
> > + nb_txq_setup = (queueid_t) n;
> > + else
> > + rte_exit(EXIT_FAILURE, "txq-setup %d invalid - must
> be"
> > + " >= 0 && <= %u\n", n,
> > + get_allowed_max_nb_txq(&pid));
> > + }
> > if (!nb_rxq && !nb_txq) {
> > rte_exit(EXIT_FAILURE, "Either rx or tx queues should "
> > "be non-zero\n");
> > @@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
> > /* Set offload configuration from command line parameters. */
> > rx_mode.offloads = rx_offloads;
> > tx_mode.offloads = tx_offloads;
> > +
> > + if (nb_rxq_setup > nb_rxq)
> > + nb_rxq_setup = nb_rxq;
> > + if (nb_txq_setup > nb_txq)
> > + nb_txq_setup = nb_txq;
> > }
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 46dc22c94..790e7359c 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
> > */
> > queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
> > queueid_t nb_txq = 1; /**< Number of TX queues per port. */
> > +queueid_t nb_rxq_setup = MAX_QUEUE_ID; /**< Number of RX queues
> per
> > +port start when dev_start. */ queueid_t nb_txq_setup =
> MAX_QUEUE_ID;
> > +/**< Number of TX queues per port start when dev_start */
> >
> > /*
> > * Configurable number of RX/TX ring descriptors.
> > @@ -1594,7 +1598,7 @@ start_port(portid_t pid)
> > /* Apply Tx offloads configuration */
> > port->tx_conf.offloads = port->dev_conf.txmode.offloads;
> > /* setup tx queues */
> > - for (qi = 0; qi < nb_txq; qi++) {
> > + for (qi = 0; qi < nb_txq_setup; qi++) {
> > if ((numa_support) &&
> > (txring_numa[pi] != NUMA_NO_CONFIG))
> > diag = rte_eth_tx_queue_setup(pi, qi, @@ -1622,7
> +1626,7 @@
> > start_port(portid_t pid)
> > /* Apply Rx offloads configuration */
> > port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
> > /* setup rx queues */
> > - for (qi = 0; qi < nb_rxq; qi++) {
> > + for (qi = 0; qi < nb_rxq_setup; qi++) {
> > if ((numa_support) &&
> > (rxring_numa[pi] != NUMA_NO_CONFIG)) {
> > struct rte_mempool * mp =
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > 153abea05..1a423eb8c 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -373,6 +373,8 @@ extern uint64_t rss_hf;
> >
> > extern queueid_t nb_rxq;
> > extern queueid_t nb_txq;
> > +extern queueid_t nb_rxq_setup;
> > +extern queueid_t nb_txq_setup;
> >
> > extern uint16_t nb_rxd;
> > extern uint16_t nb_txd;
> > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > b/doc/guides/testpmd_app_ug/run_app.rst
> > index 1fd53958a..63dbec407 100644
> > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > @@ -354,6 +354,12 @@ The commandline options are:
> > Set the number of RX queues per port to N, where 1 <= N <= 65535.
> > The default value is 1.
> >
> > +* ``--rxq-setup=N``
> > +
> > + Set the number of RX queues will be setup before device started,
> > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > + larger than rxq, it will be set to rxq automatically.
> > +
> > * ``--rxd=N``
> >
> > Set the number of descriptors in the RX rings to N, where N > 0.
> > @@ -364,6 +370,12 @@ The commandline options are:
> > Set the number of TX queues per port to N, where 1 <= N <= 65535.
> > The default value is 1.
> >
> > +* ``--txq-setup=N``
> > +
> > + Set the number of TX queues will be setup before device started,
> > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > + larger than txq, it will be set to txq automatically.
> > +
> > * ``--txd=N``
> >
> > Set the number of descriptors in the TX rings to N, where N > 0.
> > --
> > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for queue setup
2018-03-14 17:41 ` Ananyev, Konstantin
@ 2018-03-15 3:59 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 3:59 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 1:41 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for
> queue setup
>
> > +}
> > +
> > +cmdline_parse_inst_t cmd_queue_setup = {
> > + .f = cmd_queue_setup_parsed,
> > + .data = NULL,
> > + .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>",
>
> It probably would be good to add an ability to specify rx/tx queue offloads by
> this command.
> Konstantin
OK.
>
>
> > + .tokens = {
> > + (void *)&cmd_queue_setup_queue,
> > + (void *)&cmd_queue_setup_setup,
> > + (void *)&cmd_queue_setup_rxtx,
> > + (void *)&cmd_queue_setup_port_id,
> > + (void *)&cmd_queue_setup_queue_idx,
> > + (void *)&cmd_queue_setup_ring_size,
> > + NULL,
> > + },
> > +};
> > +
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-15 3:13 ` Zhang, Qi Z
@ 2018-03-15 13:16 ` Ananyev, Konstantin
2018-03-15 15:08 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-15 13:16 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
Hi Qi,
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, March 15, 2018 3:14 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
>
> Hi Konstantin:
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, March 14, 2018 8:32 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
> >
> > Hi Qi,
> >
> > >
> > > The patch let etherdev driver expose the capability flag through
> > > rte_eth_dev_info_get when it support deferred queue configuraiton,
> > > then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> > > continue to setup the queue or just return fail when device already
> > > started.
> > >
> > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > ---
> > > doc/guides/nics/features.rst | 8 ++++++++
> > > lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> > > lib/librte_ether/rte_ethdev.h | 11 +++++++++++
> > > 3 files changed, 37 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/doc/guides/nics/features.rst
> > > b/doc/guides/nics/features.rst index 1b4fb979f..36ad21a1f 100644
> > > --- a/doc/guides/nics/features.rst
> > > +++ b/doc/guides/nics/features.rst
> > > @@ -892,7 +892,15 @@ Documentation describes performance values.
> > >
> > > See ``dpdk.org/doc/perf/*``.
> > >
> > > +.. _nic_features_queue_deferred_setup_capabilities:
> > >
> > > +Queue deferred setup capabilities
> > > +---------------------------------
> > > +
> > > +Supports queue setup / release after device started.
> > > +
> > > +* **[provides] rte_eth_dev_info**:
> > >
> > ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFE
> > RRED_
> > > TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELE
> > > ASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> > > +* **[related] API**: ``rte_eth_dev_info_get()``.
> > >
> > > .. _nic_features_other:
> > >
> > > diff --git a/lib/librte_ether/rte_ethdev.c
> > > b/lib/librte_ether/rte_ethdev.c index a6ce2a5ba..6c906c4df 100644
> > > --- a/lib/librte_ether/rte_ethdev.c
> > > +++ b/lib/librte_ether/rte_ethdev.c
> > > @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> > uint16_t rx_queue_id,
> > > return -EINVAL;
> > > }
> > >
> > > - if (dev->data->dev_started) {
> > > - RTE_PMD_DEBUG_TRACE(
> > > - "port %d must be stopped to allow configuration\n", port_id);
> > > - return -EBUSY;
> > > - }
> > > -
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > -ENOTSUP);
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> > -ENOTSUP);
> > >
> > > @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> > uint16_t rx_queue_id,
> > > return -EINVAL;
> > > }
> > >
> > > + if (dev->data->dev_started &&
> > > + !(dev_info.deferred_queue_config_capa &
> > > + DEV_DEFERRED_RX_QUEUE_SETUP))
> > > + return -EINVAL;
> > > +
> >
> > I think now you have to check here that the queue is stopped.
> > Otherwise you might attempt to reconfigure running queue.
>
> I'm not sure if it's necessary to let application use different API sequence for a deferred configure and deferred re-configure.
> Can we just call dev_ops->rx_queue_stop before rx_queue_release here
I don't follow you here.
Let's say that right now inside queue_start() we do the check:
if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED)
Right now it is not possible to call queue_setup() without dev_stop() before it -
that's why we have the if (dev->data->dev_started) check in queue_setup() right now.
Though with your patch that is not the case anymore - the user is able to call queue_setup()
without stopping the whole device.
But he still has to stop the queue first.
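A hedged sketch of the extra guard being asked for in rte_eth_rx_queue_setup() (the error message wording is an assumption; the state constant is the existing one):

if (dev->data->dev_started &&
    dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) {
	RTE_PMD_DEBUG_TRACE(
		"queue %d of port %d must be stopped to allow re-setup\n",
		rx_queue_id, port_id);
	return -EBUSY;
}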
>
> >
> >
> > > rxq = dev->data->rx_queues;
> > > if (rxq[rx_queue_id]) {
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > > -ENOTSUP);
> >
> > I don't think it is *that* straightforward.
> > rx_queue_setup() parameters can imply different rx function (and related dev
> > icesettings) that are already setuped by previous queue_setup()/dev_start.
> > So I think you need to do one of 2 things:
> > 1. rework ethdev layer to introduce a separate rx function (and related
> > settings) for each queue.
> > 2. at rx_queue_setup() if it is invoked after dev_start - check that given
> > queue settings wouldn't contradict with current device settings (rx function,
> > etc.).
> > If they do - return an error.
> Yes, I think what we have is option 2 here, the dev_ops->rx_queue_setup will return fail if conflict with previous setting
Hmm and what makes you think that?
As far as I know, it is not the case right now.
Let's say I do:
....
rx_queue_setup(port=0,queue=0, mp=mb_size_2048);
dev_start(port=0);
...
rx_queue_setup(port=0,queue=1,mp=mb_size_1024);
If the current rx function doesn't support multi-segs then the second rx_queue_setup() should fail.
Though I don't think that would happen with the current implementation.
Same story for TX offloads, though it is probably not that critical, as for most Intel PMDs HW TX offloads will become per port in 18.05.
As far as I can see, you don't have either of these options implemented right now - that's the problem.
> I'm also thinking about option 1, the idea is to move per queue rx/tx function into driver layer, so it will not break existing API.
>
> 1. driver can expose the capability like per_queue_rx or per_queue_tx
> 2. application can enable this capability by dev_config with rte_eth_conf
> 3, if per_queue_rx is not enable, nothing change, so we are at option 2
> 4. if per_queue_rx is enabled, driver will set rx_pkt_burst with a hook function which redirect to an function ptr in a per queue rx function
> tables ( I guess performance is impacted somehow, but this is the cost if you want different offload for different queue)
I don't think we need to overcomplicate things here.
It should be transparent to the user - the user just calls queue_setup(), and based on its input parameters
the PMD selects a function that fits best.
Pretty much what we have right now, just possibly with an array of functions (one per queue).
>
> >
> > From my perspective - 1) is a better choice though it required more work,
> > and possibly ABI breakage.
> > I did some work in that direction as RFC:
> > http://dpdk.org/dev/patchwork/patch/31866/
>
I will study this, thanks for the heads-up.
> >
> > 2) might be also possible, but looks a bit clumsy as rx_queue_setup() might
> > now fail even with valid parameters - all depends on previous queue
> > configurations.
> >
> > Same story applies for TX.
> >
> >
> > > + if (dev->data->dev_started &&
> > > + !(dev_info.deferred_queue_config_capa &
> > > + DEV_DEFERRED_RX_QUEUE_RELEASE))
> > > + return -EINVAL;
> > > (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > > rxq[rx_queue_id] = NULL;
> > > }
> > > @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> > uint16_t tx_queue_id,
> > > return -EINVAL;
> > > }
> > >
> > > - if (dev->data->dev_started) {
> > > - RTE_PMD_DEBUG_TRACE(
> > > - "port %d must be stopped to allow configuration\n", port_id);
> > > - return -EBUSY;
> > > - }
> > > -
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > -ENOTSUP);
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup,
> > -ENOTSUP);
> > >
> > > @@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> > uint16_t tx_queue_id,
> > > return -EINVAL;
> > > }
> > >
> > > + if (dev->data->dev_started &&
> > > + !(dev_info.deferred_queue_config_capa &
> > > + DEV_DEFERRED_TX_QUEUE_SETUP))
> > > + return -EINVAL;
> > > +
> > > txq = dev->data->tx_queues;
> > > if (txq[tx_queue_id]) {
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> > > -ENOTSUP);
> > > + if (dev->data->dev_started &&
> > > + !(dev_info.deferred_queue_config_capa &
> > > + DEV_DEFERRED_TX_QUEUE_RELEASE))
> > > + return -EINVAL;
> > > (*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
> > > txq[tx_queue_id] = NULL;
> > > }
> > > diff --git a/lib/librte_ether/rte_ethdev.h
> > > b/lib/librte_ether/rte_ethdev.h index 036153306..410e58c50 100644
> > > --- a/lib/librte_ether/rte_ethdev.h
> > > +++ b/lib/librte_ether/rte_ethdev.h
> > > @@ -981,6 +981,15 @@ struct rte_eth_conf {
> > > */
> > > #define DEV_TX_OFFLOAD_SECURITY 0x00020000
> > >
> > > +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001 /**< Deferred
> > setup rx
> > > +queue */ #define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002 /**<
> > Deferred
> > > +setup tx queue */ #define DEV_DEFERRED_RX_QUEUE_RELEASE
> > 0x00000004
> > > +/**< Deferred release rx queue */ #define
> > > +DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008 /**< Deferred release
> > tx
> > > +queue */
> > > +
> >
> > I don't think we do need flags for both setup a and release.
> > If runtime setup is supported - surely dynamic release should be supported
> > too.
> > Also probably RUNTIME_RX_QUEUE_SETUP sounds a bit better.
>
> Agree
>
> Thanks
> Qi
>
> >
> > Konstantin
> >
> > > /*
> > > * If new Tx offload capabilities are defined, they also must be
> > > * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> > > @@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
> > > /** Configured number of rx/tx queues */
> > > uint16_t nb_rx_queues; /**< Number of RX queues. */
> > > uint16_t nb_tx_queues; /**< Number of TX queues. */
> > > + uint64_t deferred_queue_config_capa;
> > > + /**< queues can be setup/release after dev_start (DEV_DEFERRED_). */
> > > };
> > >
> > > /**
> > > --
> > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-15 3:22 ` Zhang, Qi Z
2018-03-15 3:50 ` Zhang, Qi Z
@ 2018-03-15 13:22 ` Ananyev, Konstantin
2018-03-15 14:30 ` Zhang, Qi Z
1 sibling, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-15 13:22 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, March 15, 2018 3:22 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, March 14, 2018 8:36 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > Sent: Friday, March 2, 2018 4:13 AM
> > > To: thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > Qi
> > > Z <qi.z.zhang@intel.com>
> > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > setup
> > >
> > > Expose the deferred queue configuration capability and enhance
> > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation when
> > > device already started.
> > >
> > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > ---
> > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > drivers/net/i40e/i40e_rxtx.c | 62
> > ++++++++++++++++++++++++++++++++++++++++--
> > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a 100644
> > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev *dev,
> > struct rte_eth_dev_info *dev_info)
> > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > + dev_info->deferred_queue_config_capa =
> > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > +
> > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > > sizeof(uint32_t);
> > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> > > 1217e5a61..e5f532cf7 100644
> > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> > *dev,
> > > uint16_t len, i;
> > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > int q_offset, use_def_burst_func = 1;
> > > + int ret = 0;
> > >
> > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > I40E_MAC_X722_VF) {
> > > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> > *dev,
> > > rxq->dcb_tc = i;
> > > }
> > >
> > > + if (dev->data->dev_started) {
> > > + ret = i40e_rx_queue_init(rxq);
> > > + if (ret != I40E_SUCCESS) {
> > > + PMD_DRV_LOG(ERR,
> > > + "Failed to do RX queue initialization");
> > > + return ret;
> > > + }
> > > + if (ad->rx_vec_allowed)
> >
> > Better to check what rx function is installed right now.
> Yes, it should be fixed, need to return fail if any conflict
> >
> > > + i40e_rxq_vec_setup(rxq);
> > > + if (!rxq->rx_deferred_start) {
> > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> >
> > I don't think it is a good idea to start/stop queue inside
> > queue_setup/queue_release.
> > There is special API (queue_start/queue_stop) to do this.
>
> The idea is if dev already started, the queue is supposed to be started automatically after queue_setup.
Why is that?
Maybe the user doesn't want to start the queue yet - maybe he only wants to set it up.
Maybe he would need to call queue_setup() once again later, before starting it - based on some logic.
If the user wants to set up and start the queue immediately, he can always do:
rc = queue_setup(...);
if (rc == 0)
queue_start(...);
We have a pretty well-defined API here; let's keep it like that.
Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
2018-03-15 3:58 ` Zhang, Qi Z
@ 2018-03-15 13:42 ` Ananyev, Konstantin
2018-03-15 14:31 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-15 13:42 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, March 15, 2018 3:58 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, March 15, 2018 1:39 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> > deferred queue setup
> >
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > Sent: Friday, March 2, 2018 4:13 AM
> > > To: thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > Qi
> > > Z <qi.z.zhang@intel.com>
> > > Subject: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> > > deferred queue setup
> > >
> > > Add two parameters:
> > > rxq-setup: set the number of RX queues be setup before device started
> > > txq-setup: set the number of TX queues be setup before device started.
> >
> > Not sure we do need these new parameters at all - in next patch you
> > introduce ability to do queue_setup from command-line.
> > Plus we already have an ability to do queue_stop/queue_start from
> > command-line.
> > I think that would be enough for testing.
> > Konstantin
>
> Without these parameters, we can only reconfigure a queue after dev_start, but we can't cover the case of deferring the setup of a fresh queue.
We do have:
port stop
port config all (rxq|txq) (value)
port start
If we add a new command to specify which queues should be deferred at dev_start -
wouldn't that be enough?
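To illustrate with the commands that already exist (queue 3 taken as the one we want to bring up later):

testpmd> port stop 0
testpmd> port config all rxq 4
testpmd> port start 0
testpmd> port 0 rxq 3 stop
... traffic keeps running on queues 0-2 ...
testpmd> port 0 rxq 3 start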
>
> >
> >
> > >
> > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > ---
> > > app/test-pmd/parameters.c | 29
> > +++++++++++++++++++++++++++++
> > > app/test-pmd/testpmd.c | 8 ++++++--
> > > app/test-pmd/testpmd.h | 2 ++
> > > doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
> > > 4 files changed, 49 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > > index 97d22b860..497259ee7 100644
> > > --- a/app/test-pmd/parameters.c
> > > +++ b/app/test-pmd/parameters.c
> > > @@ -146,8 +146,12 @@ usage(char* progname)
> > > printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
> > > printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
> > > printf(" --rxq=N: set the number of RX queues per port to N.\n");
> > > + printf(" --rxq-setup=N: set the number of RX queues be setup before"
> > > + "device start to N.\n");
> > > printf(" --rxd=N: set the number of descriptors in RX rings to N.\n");
> > > printf(" --txq=N: set the number of TX queues per port to N.\n");
> > > + printf(" --txq-setup=N: set the number of TX queues be setup before"
> > > + "device start to N.\n");
> > > printf(" --txd=N: set the number of descriptors in TX rings to N.\n");
> > > printf(" --burst=N: set the number of packets per burst to N.\n");
> > > printf(" --mbcache=N: set the cache of mbuf memory pool to N.\n");
> > > @@ -596,7 +600,9 @@ launch_args_parse(int argc, char** argv)
> > > { "rss-ip", 0, 0, 0 },
> > > { "rss-udp", 0, 0, 0 },
> > > { "rxq", 1, 0, 0 },
> > > + { "rxq-setup", 1, 0, 0 },
> > > { "txq", 1, 0, 0 },
> > > + { "txq-setup", 1, 0, 0 },
> > > { "rxd", 1, 0, 0 },
> > > { "txd", 1, 0, 0 },
> > > { "burst", 1, 0, 0 },
> > > @@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
> > > " >= 0 && <= %u\n", n,
> > > get_allowed_max_nb_rxq(&pid));
> > > }
> > > + if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
> > > + n = atoi(optarg);
> > > + if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
> > > + nb_rxq_setup = (queueid_t) n;
> > > + else
> > > + rte_exit(EXIT_FAILURE, "rxq-setup %d invalid - must
> > be"
> > > + " >= 0 && <= %u\n", n,
> > > + get_allowed_max_nb_rxq(&pid));
> > > + }
> > > if (!strcmp(lgopts[opt_idx].name, "txq")) {
> > > n = atoi(optarg);
> > > if (n >= 0 && check_nb_txq((queueid_t)n) == 0) @@
> > -942,6 +957,15
> > > @@ launch_args_parse(int argc, char** argv)
> > > " >= 0 && <= %u\n", n,
> > > get_allowed_max_nb_txq(&pid));
> > > }
> > > + if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
> > > + n = atoi(optarg);
> > > + if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
> > > + nb_txq_setup = (queueid_t) n;
> > > + else
> > > + rte_exit(EXIT_FAILURE, "txq-setup %d invalid - must
> > be"
> > > + " >= 0 && <= %u\n", n,
> > > + get_allowed_max_nb_txq(&pid));
> > > + }
> > > if (!nb_rxq && !nb_txq) {
> > > rte_exit(EXIT_FAILURE, "Either rx or tx queues should "
> > > "be non-zero\n");
> > > @@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
> > > /* Set offload configuration from command line parameters. */
> > > rx_mode.offloads = rx_offloads;
> > > tx_mode.offloads = tx_offloads;
> > > +
> > > + if (nb_rxq_setup > nb_rxq)
> > > + nb_rxq_setup = nb_rxq;
> > > + if (nb_txq_setup > nb_txq)
> > > + nb_txq_setup = nb_txq;
> > > }
> > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > > 46dc22c94..790e7359c 100644
> > > --- a/app/test-pmd/testpmd.c
> > > +++ b/app/test-pmd/testpmd.c
> > > @@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
> > > */
> > > queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
> > > queueid_t nb_txq = 1; /**< Number of TX queues per port. */
> > > +queueid_t nb_rxq_setup = MAX_QUEUE_ID; /**< Number of RX queues
> > per
> > > +port start when dev_start. */ queueid_t nb_txq_setup =
> > MAX_QUEUE_ID;
> > > +/**< Number of TX queues per port start when dev_start */
> > >
> > > /*
> > > * Configurable number of RX/TX ring descriptors.
> > > @@ -1594,7 +1598,7 @@ start_port(portid_t pid)
> > > /* Apply Tx offloads configuration */
> > > port->tx_conf.offloads = port->dev_conf.txmode.offloads;
> > > /* setup tx queues */
> > > - for (qi = 0; qi < nb_txq; qi++) {
> > > + for (qi = 0; qi < nb_txq_setup; qi++) {
> > > if ((numa_support) &&
> > > (txring_numa[pi] != NUMA_NO_CONFIG))
> > > diag = rte_eth_tx_queue_setup(pi, qi, @@ -1622,7
> > +1626,7 @@
> > > start_port(portid_t pid)
> > > /* Apply Rx offloads configuration */
> > > port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
> > > /* setup rx queues */
> > > - for (qi = 0; qi < nb_rxq; qi++) {
> > > + for (qi = 0; qi < nb_rxq_setup; qi++) {
> > > if ((numa_support) &&
> > > (rxring_numa[pi] != NUMA_NO_CONFIG)) {
> > > struct rte_mempool * mp =
> > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > > 153abea05..1a423eb8c 100644
> > > --- a/app/test-pmd/testpmd.h
> > > +++ b/app/test-pmd/testpmd.h
> > > @@ -373,6 +373,8 @@ extern uint64_t rss_hf;
> > >
> > > extern queueid_t nb_rxq;
> > > extern queueid_t nb_txq;
> > > +extern queueid_t nb_rxq_setup;
> > > +extern queueid_t nb_txq_setup;
> > >
> > > extern uint16_t nb_rxd;
> > > extern uint16_t nb_txd;
> > > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > > b/doc/guides/testpmd_app_ug/run_app.rst
> > > index 1fd53958a..63dbec407 100644
> > > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > > @@ -354,6 +354,12 @@ The commandline options are:
> > > Set the number of RX queues per port to N, where 1 <= N <= 65535.
> > > The default value is 1.
> > >
> > > +* ``--rxq-setup=N``
> > > +
> > > + Set the number of RX queues will be setup before device started,
> > > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > > + larger than rxq, it will be set to rxq automatically.
> > > +
> > > * ``--rxd=N``
> > >
> > > Set the number of descriptors in the RX rings to N, where N > 0.
> > > @@ -364,6 +370,12 @@ The commandline options are:
> > > Set the number of TX queues per port to N, where 1 <= N <= 65535.
> > > The default value is 1.
> > >
> > > +* ``--txq-setup=N``
> > > +
> > > + Set the number of TX queues will be setup before device started,
> > > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > > + larger than txq, it will be set to txq automatically.
> > > +
> > > * ``--txd=N``
> > >
> > > Set the number of descriptors in the TX rings to N, where N > 0.
> > > --
> > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-15 13:22 ` Ananyev, Konstantin
@ 2018-03-15 14:30 ` Zhang, Qi Z
2018-03-15 15:22 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 14:30 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 9:23 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, March 15, 2018 3:22 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > > Qi Z <qi.z.zhang@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > queue setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > To: thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>; Zhang,
> > > Qi
> > > > Z <qi.z.zhang@intel.com>
> > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > > setup
> > > >
> > > > Expose the deferred queue configuration capability and enhance
> > > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation
> > > > when device already started.
> > > >
> > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > ---
> > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > ++++++++++++++++++++++++++++++++++++++++--
> > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a 100644
> > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev
> *dev,
> > > struct rte_eth_dev_info *dev_info)
> > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > + dev_info->deferred_queue_config_capa =
> > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > +
> > > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > > > sizeof(uint32_t);
> > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > index
> > > > 1217e5a61..e5f532cf7 100644
> > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> rte_eth_dev
> > > *dev,
> > > > uint16_t len, i;
> > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > int q_offset, use_def_burst_func = 1;
> > > > + int ret = 0;
> > > >
> > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > I40E_MAC_X722_VF) {
> > > > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> rte_eth_dev
> > > *dev,
> > > > rxq->dcb_tc = i;
> > > > }
> > > >
> > > > + if (dev->data->dev_started) {
> > > > + ret = i40e_rx_queue_init(rxq);
> > > > + if (ret != I40E_SUCCESS) {
> > > > + PMD_DRV_LOG(ERR,
> > > > + "Failed to do RX queue initialization");
> > > > + return ret;
> > > > + }
> > > > + if (ad->rx_vec_allowed)
> > >
> > > Better to check what rx function is installed right now.
> > Yes, it should be fixed, need to return fail if any conflict
> > >
> > > > + i40e_rxq_vec_setup(rxq);
> > > > + if (!rxq->rx_deferred_start) {
> > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > >
> > > I don't think it is a good idea to start/stop queue inside
> > > queue_setup/queue_release.
> > > There is special API (queue_start/queue_stop) to do this.
> >
> > The idea is if dev already started, the queue is supposed to be started
> automatically after queue_setup.
>
> Why is that?
Because the device is already started, it's like a running conveyor belt: anything you put or replace on it just moves automatically.
> Might be user doesn't want to start queue, might be he only wants to start
> it.
The deferred_start flag can be used for that.
> Might be he would need to call queue_setup() once again later before
> starting it - based on some logic?
dev_ops->queue_stop will be called before dev_ops->queue_setup in rte_eth_[rx|tx]_queue_setup, if the queue is running.
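I.e. in rte_eth_rx_queue_setup() something like below (a sketch of what I mean, not in the current patch):

if (dev->data->dev_started &&
    dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED)
        (*dev->dev_ops->rx_queue_stop)(dev, rx_queue_id);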
> If the user wants to setup and start the queue immediately he can always do:
>
> rc = queue_setup(...);
> if (rc == 0)
> queue_start(...);
The application has no need to call queue_start() explicitly in this case.
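So with the proposed behavior the application side would look roughly like this (a sketch only; rx_deferred_start is the existing rte_eth_rxconf field, other arguments are placeholders):

/* device already started */
rxconf.rx_deferred_start = 0;
rte_eth_rx_queue_setup(port, 2, nb_rxd, socket, &rxconf, mp); /* queue 2 starts right away */

rxconf.rx_deferred_start = 1;
rte_eth_rx_queue_setup(port, 3, nb_rxd, socket, &rxconf, mp); /* queue 3 stays stopped... */
rte_eth_dev_rx_queue_start(port, 3);                          /* ...until started explicitly */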
>
> We have a pretty well defined API here let's keep it like that.
> Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for deferred queue setup
2018-03-15 13:42 ` Ananyev, Konstantin
@ 2018-03-15 14:31 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 14:31 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 9:42 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, March 15, 2018 3:58 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> > deferred queue setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, March 15, 2018 1:39 AM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > > Qi Z <qi.z.zhang@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters
> > > for deferred queue setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > To: thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>; Zhang,
> > > Qi
> > > > Z <qi.z.zhang@intel.com>
> > > > Subject: [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for
> > > > deferred queue setup
> > > >
> > > > Add two parameters:
> > > > rxq-setup: set the number of RX queues be setup before device
> > > > started
> > > > txq-setup: set the number of TX queues be setup before device started.
> > >
> > > Not sure we do need these new parameters at all - in next patch you
> > > introduce ability to do queue_setup from command-line.
> > > Plus we already have an ability to do queue_stop/queue_start from
> > > command-line.
> > > I think that would be enough for testing.
> > > Konstantin
> >
> > Without these parameters, we can only reconfigure a queue after dev_start,
> but not the case that deferred configure a fresh queue.
>
> We do have:
> port stop
> port config all (rxq|txq) (value)
> port start
>
> If we'll add a new command to specify which queues should be deferred at
> dev_start - wouldn't that be enough?
>
OK, that's what I want. Thanks.
>
> >
> > >
> > >
> > > >
> > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > ---
> > > > app/test-pmd/parameters.c | 29
> > > +++++++++++++++++++++++++++++
> > > > app/test-pmd/testpmd.c | 8 ++++++--
> > > > app/test-pmd/testpmd.h | 2 ++
> > > > doc/guides/testpmd_app_ug/run_app.rst | 12 ++++++++++++
> > > > 4 files changed, 49 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > > > index 97d22b860..497259ee7 100644
> > > > --- a/app/test-pmd/parameters.c
> > > > +++ b/app/test-pmd/parameters.c
> > > > @@ -146,8 +146,12 @@ usage(char* progname)
> > > > printf(" --rss-ip: set RSS functions to IPv4/IPv6 only .\n");
> > > > printf(" --rss-udp: set RSS functions to IPv4/IPv6 + UDP.\n");
> > > > printf(" --rxq=N: set the number of RX queues per port to
> > > > N.\n");
> > > > + printf(" --rxq-setup=N: set the number of RX queues be setup
> before"
> > > > + "device start to N.\n");
> > > > printf(" --rxd=N: set the number of descriptors in RX rings to
> N.\n");
> > > > printf(" --txq=N: set the number of TX queues per port to
> > > > N.\n");
> > > > + printf(" --txq-setup=N: set the number of TX queues be setup
> before"
> > > > + "device start to N.\n");
> > > > printf(" --txd=N: set the number of descriptors in TX rings to
> N.\n");
> > > > printf(" --burst=N: set the number of packets per burst to N.\n");
> > > > printf(" --mbcache=N: set the cache of mbuf memory pool to
> > > > N.\n"); @@ -596,7 +600,9 @@ launch_args_parse(int argc, char**
> argv)
> > > > { "rss-ip", 0, 0, 0 },
> > > > { "rss-udp", 0, 0, 0 },
> > > > { "rxq", 1, 0, 0 },
> > > > + { "rxq-setup", 1, 0, 0 },
> > > > { "txq", 1, 0, 0 },
> > > > + { "txq-setup", 1, 0, 0 },
> > > > { "rxd", 1, 0, 0 },
> > > > { "txd", 1, 0, 0 },
> > > > { "burst", 1, 0, 0 },
> > > > @@ -933,6 +939,15 @@ launch_args_parse(int argc, char** argv)
> > > > " >= 0 && <= %u\n", n,
> > > > get_allowed_max_nb_rxq(&pid));
> > > > }
> > > > + if (!strcmp(lgopts[opt_idx].name, "rxq-setup")) {
> > > > + n = atoi(optarg);
> > > > + if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
> > > > + nb_rxq_setup = (queueid_t) n;
> > > > + else
> > > > + rte_exit(EXIT_FAILURE, "rxq-setup %d invalid -
> must
> > > be"
> > > > + " >= 0 && <= %u\n", n,
> > > > + get_allowed_max_nb_rxq(&pid));
> > > > + }
> > > > if (!strcmp(lgopts[opt_idx].name, "txq")) {
> > > > n = atoi(optarg);
> > > > if (n >= 0 && check_nb_txq((queueid_t)n) == 0) @@
> > > -942,6 +957,15
> > > > @@ launch_args_parse(int argc, char** argv)
> > > > " >= 0 && <= %u\n", n,
> > > > get_allowed_max_nb_txq(&pid));
> > > > }
> > > > + if (!strcmp(lgopts[opt_idx].name, "txq-setup")) {
> > > > + n = atoi(optarg);
> > > > + if (n >= 0 && check_nb_txq((queueid_t)n) == 0)
> > > > + nb_txq_setup = (queueid_t) n;
> > > > + else
> > > > + rte_exit(EXIT_FAILURE, "txq-setup %d invalid -
> must
> > > be"
> > > > + " >= 0 && <= %u\n", n,
> > > > + get_allowed_max_nb_txq(&pid));
> > > > + }
> > > > if (!nb_rxq && !nb_txq) {
> > > > rte_exit(EXIT_FAILURE, "Either rx or tx queues should
> "
> > > > "be non-zero\n");
> > > > @@ -1119,4 +1143,9 @@ launch_args_parse(int argc, char** argv)
> > > > /* Set offload configuration from command line parameters. */
> > > > rx_mode.offloads = rx_offloads;
> > > > tx_mode.offloads = tx_offloads;
> > > > +
> > > > + if (nb_rxq_setup > nb_rxq)
> > > > + nb_rxq_setup = nb_rxq;
> > > > + if (nb_txq_setup > nb_txq)
> > > > + nb_txq_setup = nb_txq;
> > > > }
> > > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > > > 46dc22c94..790e7359c 100644
> > > > --- a/app/test-pmd/testpmd.c
> > > > +++ b/app/test-pmd/testpmd.c
> > > > @@ -207,6 +207,10 @@ uint8_t dcb_test = 0;
> > > > */
> > > > queueid_t nb_rxq = 1; /**< Number of RX queues per port. */
> > > > queueid_t nb_txq = 1; /**< Number of TX queues per port. */
> > > > +queueid_t nb_rxq_setup = MAX_QUEUE_ID; /**< Number of RX
> queues
> > > per
> > > > +port start when dev_start. */ queueid_t nb_txq_setup =
> > > MAX_QUEUE_ID;
> > > > +/**< Number of TX queues per port start when dev_start */
> > > >
> > > > /*
> > > > * Configurable number of RX/TX ring descriptors.
> > > > @@ -1594,7 +1598,7 @@ start_port(portid_t pid)
> > > > /* Apply Tx offloads configuration */
> > > > port->tx_conf.offloads = port->dev_conf.txmode.offloads;
> > > > /* setup tx queues */
> > > > - for (qi = 0; qi < nb_txq; qi++) {
> > > > + for (qi = 0; qi < nb_txq_setup; qi++) {
> > > > if ((numa_support) &&
> > > > (txring_numa[pi] != NUMA_NO_CONFIG))
> > > > diag = rte_eth_tx_queue_setup(pi, qi, @@
> -1622,7
> > > +1626,7 @@
> > > > start_port(portid_t pid)
> > > > /* Apply Rx offloads configuration */
> > > > port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
> > > > /* setup rx queues */
> > > > - for (qi = 0; qi < nb_rxq; qi++) {
> > > > + for (qi = 0; qi < nb_rxq_setup; qi++) {
> > > > if ((numa_support) &&
> > > > (rxring_numa[pi] != NUMA_NO_CONFIG)) {
> > > > struct rte_mempool * mp =
> > > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > > > 153abea05..1a423eb8c 100644
> > > > --- a/app/test-pmd/testpmd.h
> > > > +++ b/app/test-pmd/testpmd.h
> > > > @@ -373,6 +373,8 @@ extern uint64_t rss_hf;
> > > >
> > > > extern queueid_t nb_rxq;
> > > > extern queueid_t nb_txq;
> > > > +extern queueid_t nb_rxq_setup;
> > > > +extern queueid_t nb_txq_setup;
> > > >
> > > > extern uint16_t nb_rxd;
> > > > extern uint16_t nb_txd;
> > > > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > > > b/doc/guides/testpmd_app_ug/run_app.rst
> > > > index 1fd53958a..63dbec407 100644
> > > > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > > > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > > > @@ -354,6 +354,12 @@ The commandline options are:
> > > > Set the number of RX queues per port to N, where 1 <= N <=
> 65535.
> > > > The default value is 1.
> > > >
> > > > +* ``--rxq-setup=N``
> > > > +
> > > > + Set the number of RX queues will be setup before device started,
> > > > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > > > + larger than rxq, it will be set to rxq automatically.
> > > > +
> > > > * ``--rxd=N``
> > > >
> > > > Set the number of descriptors in the RX rings to N, where N > 0.
> > > > @@ -364,6 +370,12 @@ The commandline options are:
> > > > Set the number of TX queues per port to N, where 1 <= N <=
> 65535.
> > > > The default value is 1.
> > > >
> > > > +* ``--txq-setup=N``
> > > > +
> > > > + Set the number of TX queues will be setup before device started,
> > > > + where 0 <= N <= 65535. The default value is rxq, if the number is
> > > > + larger than txq, it will be set to txq automatically.
> > > > +
> > > > * ``--txd=N``
> > > >
> > > > Set the number of descriptors in the TX rings to N, where N > 0.
> > > > --
> > > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-15 13:16 ` Ananyev, Konstantin
@ 2018-03-15 15:08 ` Zhang, Qi Z
2018-03-15 15:38 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-15 15:08 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 9:17 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
>
> Hi Qi,
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, March 15, 2018 3:14 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > setup
> >
> > Hi Konstantin:
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Wednesday, March 14, 2018 8:32 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > > Qi Z <qi.z.zhang@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > > setup
> > >
> > > Hi Qi,
> > >
> > > >
> > > > The patch let etherdev driver expose the capability flag through
> > > > rte_eth_dev_info_get when it support deferred queue configuraiton,
> > > > then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> > > > continue to setup the queue or just return fail when device
> > > > already started.
> > > >
> > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > ---
> > > > doc/guides/nics/features.rst | 8 ++++++++
> > > > lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> > > > lib/librte_ether/rte_ethdev.h | 11 +++++++++++
> > > > 3 files changed, 37 insertions(+), 12 deletions(-)
> > > >
> > > > diff --git a/doc/guides/nics/features.rst
> > > > b/doc/guides/nics/features.rst index 1b4fb979f..36ad21a1f 100644
> > > > --- a/doc/guides/nics/features.rst
> > > > +++ b/doc/guides/nics/features.rst
> > > > @@ -892,7 +892,15 @@ Documentation describes performance
> values.
> > > >
> > > > See ``dpdk.org/doc/perf/*``.
> > > >
> > > > +.. _nic_features_queue_deferred_setup_capabilities:
> > > >
> > > > +Queue deferred setup capabilities
> > > > +---------------------------------
> > > > +
> > > > +Supports queue setup / release after device started.
> > > > +
> > > > +* **[provides] rte_eth_dev_info**:
> > > >
> > >
> ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFE
> > > RRED_
> > > > TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELE
> > > > ASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> > > > +* **[related] API**: ``rte_eth_dev_info_get()``.
> > > >
> > > > .. _nic_features_other:
> > > >
> > > > diff --git a/lib/librte_ether/rte_ethdev.c
> > > > b/lib/librte_ether/rte_ethdev.c index a6ce2a5ba..6c906c4df 100644
> > > > --- a/lib/librte_ether/rte_ethdev.c
> > > > +++ b/lib/librte_ether/rte_ethdev.c
> > > > @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> > > uint16_t rx_queue_id,
> > > > return -EINVAL;
> > > > }
> > > >
> > > > - if (dev->data->dev_started) {
> > > > - RTE_PMD_DEBUG_TRACE(
> > > > - "port %d must be stopped to allow configuration\n",
> port_id);
> > > > - return -EBUSY;
> > > > - }
> > > > -
> > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > > -ENOTSUP);
> > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> > > -ENOTSUP);
> > > >
> > > > @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t
> port_id,
> > > uint16_t rx_queue_id,
> > > > return -EINVAL;
> > > > }
> > > >
> > > > + if (dev->data->dev_started &&
> > > > + !(dev_info.deferred_queue_config_capa &
> > > > + DEV_DEFERRED_RX_QUEUE_SETUP))
> > > > + return -EINVAL;
> > > > +
> > >
> > > I think now you have to check here that the queue is stopped.
> > > Otherwise you might attempt to reconfigure running queue.
> >
> > I'm not sure if it's necessary to let application use different API sequence
> for a deferred configure and deferred re-configure.
> > Can we just call dev_ops->rx_queue_stop before rx_queue_release here
>
> I don't follow you here.
> Let say now inside queue_start() we do check:
>
> if (dev->data->rx_queue_state[rx_queue_id] !=
> RTE_ETH_QUEUE_STATE_STOPPED)
>
> Right now it is not possible to call queue_setup() without dev_stop() before
> it - that's why we have check if (dev->data->dev_started) in queue_setup()
> right now.
> Though with your patch it not the case anymore - user is able to call
> queue_setup() without stopping the whole device.
> But he still has to stop the queue.
>
> >
> > >
> > >
> > > > rxq = dev->data->rx_queues;
> > > > if (rxq[rx_queue_id]) {
> > > >
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > > > -ENOTSUP);
> > >
> > > I don't think it is *that* straightforward.
> > > rx_queue_setup() parameters can imply different rx function (and
> > > related dev
> > > icesettings) that are already setuped by previous
> queue_setup()/dev_start.
> > > So I think you need to do one of 2 things:
> > > 1. rework ethdev layer to introduce a separate rx function (and
> > > related
> > > settings) for each queue.
> > > 2. at rx_queue_setup() if it is invoked after dev_start - check that
> > > given queue settings wouldn't contradict with current device
> > > settings (rx function, etc.).
> > > If they do - return an error.
> > Yes, I think what we have is option 2 here, the
> > dev_ops->rx_queue_setup will return fail if conflict with previous
> > setting
>
> Hmm and what makes you think that?
> As I know it is not the case right now.
> Let say I do:
> ....
> rx_queue_setup(port=0,queue=0, mp=mb_size_2048);
> dev_start(port=0);
> ...
> rx_queue_setup(port=0,queue=1,mp=mb_size_1024);
>
> If current rx function doesn't support multi-segs then second
> rx_queue_setup() should fail.
> Though I don't think that would happen with the current implementation.
Why do you think that would not happen? dev_ops->rx_queue_setup can fail, right?
I mean it's the responsibility of the low-level driver (i40e) to check for conflicts with the current settings.
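Something like below inside i40e_dev_rx_queue_setup() is what I mean (a sketch only; the helper name is made up and this check is not in the current patch):

if (dev->data->dev_started) {
        /* the rx path was already selected at dev_start; reject settings
         * that would require a different one */
        if (ad->rx_vec_allowed && !rx_queue_fits_vec_path(dev, rxq)) {
                PMD_DRV_LOG(ERR,
                        "queue %u conflicts with the rx path selected at dev_start",
                        queue_idx);
                return -EINVAL;
        }
}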
>
> Same story for TX offloads, though it probably not that critical, as for most
> Intel PMDs HW TX offloads will become per port in 18.05.
>
> As I can see you do have either of these options implemented right now -
> that's the problem.
>
> > I'm also thinking about option 1, the idea is to move per queue rx/tx
> function into driver layer, so it will not break existing API.
> >
> > 1. driver can expose the capability like per_queue_rx or per_queue_tx
> > 2. application can enable this capability by dev_config with
> > rte_eth_conf 3, if per_queue_rx is not enable, nothing change, so we
> > are at option 2 4. if per_queue_rx is enabled, driver will set
> > rx_pkt_burst with a hook function which redirect to an function ptr in
> > a per queue rx function tables ( I guess performance is impacted
> > somehow, but this is the cost if you want different offload for
> > different queue)
>
> I don't think we need to overcomplicate things here.
> It should be transparent to the user - user just calls queue_setup() - based on
> its input parameters PMD selects a function that fits best.
> Pretty much what we have right now, just possibly have an array of functions
> (one per queue).
If we don't introduce a new capability or something like that, but just take per-queue functions as the default way,
does that mean we need to change all drivers to adapt to this?
Or do you mean the following?
If (dev->rx_pkt_burst)
/* default way */
else
/* per queue function */
Regards
Qi
>
> >
> > >
> > > From my perspective - 1) is a better choice though it required more
> > > work, and possibly ABI breakage.
> > > I did some work in that direction as RFC:
> > > http://dpdk.org/dev/patchwork/patch/31866/
> >
> > I will learn this, thanks for the heads up.
> > >
> > > 2) might be also possible, but looks a bit clumsy as
> > > rx_queue_setup() might now fail even with valid parameters - all
> > > depends on previous queue configurations.
> > >
> > > Same story applies for TX.
> > >
> > >
> > > > + if (dev->data->dev_started &&
> > > > + !(dev_info.deferred_queue_config_capa &
> > > > + DEV_DEFERRED_RX_QUEUE_RELEASE))
> > > > + return -EINVAL;
> > > > (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > > > rxq[rx_queue_id] = NULL;
> > > > }
> > > > @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> > > uint16_t tx_queue_id,
> > > > return -EINVAL;
> > > > }
> > > >
> > > > - if (dev->data->dev_started) {
> > > > - RTE_PMD_DEBUG_TRACE(
> > > > - "port %d must be stopped to allow configuration\n",
> port_id);
> > > > - return -EBUSY;
> > > > - }
> > > > -
> > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > > -ENOTSUP);
> > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup,
> > > -ENOTSUP);
> > > >
> > > > @@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t
> port_id,
> > > uint16_t tx_queue_id,
> > > > return -EINVAL;
> > > > }
> > > >
> > > > + if (dev->data->dev_started &&
> > > > + !(dev_info.deferred_queue_config_capa &
> > > > + DEV_DEFERRED_TX_QUEUE_SETUP))
> > > > + return -EINVAL;
> > > > +
> > > > txq = dev->data->tx_queues;
> > > > if (txq[tx_queue_id]) {
> > > >
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> > > > -ENOTSUP);
> > > > + if (dev->data->dev_started &&
> > > > + !(dev_info.deferred_queue_config_capa &
> > > > + DEV_DEFERRED_TX_QUEUE_RELEASE))
> > > > + return -EINVAL;
> > > > (*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
> > > > txq[tx_queue_id] = NULL;
> > > > }
> > > > diff --git a/lib/librte_ether/rte_ethdev.h
> > > > b/lib/librte_ether/rte_ethdev.h index 036153306..410e58c50 100644
> > > > --- a/lib/librte_ether/rte_ethdev.h
> > > > +++ b/lib/librte_ether/rte_ethdev.h
> > > > @@ -981,6 +981,15 @@ struct rte_eth_conf {
> > > > */
> > > > #define DEV_TX_OFFLOAD_SECURITY 0x00020000
> > > >
> > > > +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001 /**<
> Deferred
> > > setup rx
> > > > +queue */ #define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002
> /**<
> > > Deferred
> > > > +setup tx queue */ #define DEV_DEFERRED_RX_QUEUE_RELEASE
> > > 0x00000004
> > > > +/**< Deferred release rx queue */ #define
> > > > +DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008 /**< Deferred
> release
> > > tx
> > > > +queue */
> > > > +
> > >
> > > I don't think we do need flags for both setup a and release.
> > > If runtime setup is supported - surely dynamic release should be
> > > supported too.
> > > Also probably RUNTIME_RX_QUEUE_SETUP sounds a bit better.
> >
> > Agree
> >
> > Thanks
> > Qi
> >
> > >
> > > Konstantin
> > >
> > > > /*
> > > > * If new Tx offload capabilities are defined, they also must be
> > > > * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> > > > @@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
> > > > /** Configured number of rx/tx queues */
> > > > uint16_t nb_rx_queues; /**< Number of RX queues. */
> > > > uint16_t nb_tx_queues; /**< Number of TX queues. */
> > > > + uint64_t deferred_queue_config_capa;
> > > > + /**< queues can be setup/release after dev_start
> > > > +(DEV_DEFERRED_). */
> > > > };
> > > >
> > > > /**
> > > > --
> > > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-15 14:30 ` Zhang, Qi Z
@ 2018-03-15 15:22 ` Ananyev, Konstantin
2018-03-16 0:52 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-15 15:22 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, March 15, 2018 2:30 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, March 15, 2018 9:23 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Thursday, March 15, 2018 3:22 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > > > Qi Z <qi.z.zhang@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > queue setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > To: thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > Qi
> > > > > Z <qi.z.zhang@intel.com>
> > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > > > setup
> > > > >
> > > > > Expose the deferred queue configuration capability and enhance
> > > > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation
> > > > > when device already started.
> > > > >
> > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > ---
> > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a 100644
> > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev
> > *dev,
> > > > struct rte_eth_dev_info *dev_info)
> > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > + dev_info->deferred_queue_config_capa =
> > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > +
> > > > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > > > > sizeof(uint32_t);
> > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > > index
> > > > > 1217e5a61..e5f532cf7 100644
> > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > rte_eth_dev
> > > > *dev,
> > > > > uint16_t len, i;
> > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > int q_offset, use_def_burst_func = 1;
> > > > > + int ret = 0;
> > > > >
> > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > I40E_MAC_X722_VF) {
> > > > > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > rte_eth_dev
> > > > *dev,
> > > > > rxq->dcb_tc = i;
> > > > > }
> > > > >
> > > > > + if (dev->data->dev_started) {
> > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > + if (ret != I40E_SUCCESS) {
> > > > > + PMD_DRV_LOG(ERR,
> > > > > + "Failed to do RX queue initialization");
> > > > > + return ret;
> > > > > + }
> > > > > + if (ad->rx_vec_allowed)
> > > >
> > > > Better to check what rx function is installed right now.
> > > Yes, it should be fixed, need to return fail if any conflict
> > > >
> > > > > + i40e_rxq_vec_setup(rxq);
> > > > > + if (!rxq->rx_deferred_start) {
> > > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > > >
> > > > I don't think it is a good idea to start/stop queue inside
> > > > queue_setup/queue_release.
> > > > There is special API (queue_start/queue_stop) to do this.
> > >
> > > The idea is if dev already started, the queue is supposed to be started
> > automatically after queue_setup.
> >
> > Why is that?
> Because device is already started, its like a running conveyor belt, anything you put or replace on it just moves automatically.
Why is that? :)
You do break existing behavior.
Right now it is possible to do:
queue_setup(); queue_setup();
for the same queue.
With your patch it is not anymore.
And I don't see a good reason to break existing behavior.
What is the advantage of calling queue_start() implicitly from queue_setup()?
Konstantin
>
> > Might be user doesn't want to start queue, might be he only wants to start
> > it.
> Use deferred_start_flag,
> > Might be he would need to call queue_setup() once again later before
> > starting it - based on some logic?
> Dev_ops->queue_stop will be called first before dev_ops->queue_setup in rte_eth_rx|tx_queue_setup, if a queue is running.
>
>
>
> > If the user wants to setup and start the queue immediately he can always do:
> >
> > rc = queue_setup(...);
> > if (rc == 0)
> > queue_start(...);
>
> application no need to call queue_start explicitly in this case.
>
> >
> > We have a pretty well defined API here let's keep it like that.
> > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-15 15:08 ` Zhang, Qi Z
@ 2018-03-15 15:38 ` Ananyev, Konstantin
2018-03-16 0:42 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-15 15:38 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Thursday, March 15, 2018 3:09 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, March 15, 2018 9:17 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
> >
> > Hi Qi,
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Thursday, March 15, 2018 3:14 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > > setup
> > >
> > > Hi Konstantin:
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Wednesday, March 14, 2018 8:32 PM
> > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang,
> > > > Qi Z <qi.z.zhang@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > > > setup
> > > >
> > > > Hi Qi,
> > > >
> > > > >
> > > > > The patch let etherdev driver expose the capability flag through
> > > > > rte_eth_dev_info_get when it support deferred queue configuraiton,
> > > > > then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> > > > > continue to setup the queue or just return fail when device
> > > > > already started.
> > > > >
> > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > ---
> > > > > doc/guides/nics/features.rst | 8 ++++++++
> > > > > lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> > > > > lib/librte_ether/rte_ethdev.h | 11 +++++++++++
> > > > > 3 files changed, 37 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/doc/guides/nics/features.rst
> > > > > b/doc/guides/nics/features.rst index 1b4fb979f..36ad21a1f 100644
> > > > > --- a/doc/guides/nics/features.rst
> > > > > +++ b/doc/guides/nics/features.rst
> > > > > @@ -892,7 +892,15 @@ Documentation describes performance
> > values.
> > > > >
> > > > > See ``dpdk.org/doc/perf/*``.
> > > > >
> > > > > +.. _nic_features_queue_deferred_setup_capabilities:
> > > > >
> > > > > +Queue deferred setup capabilities
> > > > > +---------------------------------
> > > > > +
> > > > > +Supports queue setup / release after device started.
> > > > > +
> > > > > +* **[provides] rte_eth_dev_info**:
> > > > >
> > > >
> > ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFE
> > > > RRED_
> > > > > TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELE
> > > > > ASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> > > > > +* **[related] API**: ``rte_eth_dev_info_get()``.
> > > > >
> > > > > .. _nic_features_other:
> > > > >
> > > > > diff --git a/lib/librte_ether/rte_ethdev.c
> > > > > b/lib/librte_ether/rte_ethdev.c index a6ce2a5ba..6c906c4df 100644
> > > > > --- a/lib/librte_ether/rte_ethdev.c
> > > > > +++ b/lib/librte_ether/rte_ethdev.c
> > > > > @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id,
> > > > uint16_t rx_queue_id,
> > > > > return -EINVAL;
> > > > > }
> > > > >
> > > > > - if (dev->data->dev_started) {
> > > > > - RTE_PMD_DEBUG_TRACE(
> > > > > - "port %d must be stopped to allow configuration\n",
> > port_id);
> > > > > - return -EBUSY;
> > > > > - }
> > > > > -
> > > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > > > -ENOTSUP);
> > > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> > > > -ENOTSUP);
> > > > >
> > > > > @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t
> > port_id,
> > > > uint16_t rx_queue_id,
> > > > > return -EINVAL;
> > > > > }
> > > > >
> > > > > + if (dev->data->dev_started &&
> > > > > + !(dev_info.deferred_queue_config_capa &
> > > > > + DEV_DEFERRED_RX_QUEUE_SETUP))
> > > > > + return -EINVAL;
> > > > > +
> > > >
> > > > I think now you have to check here that the queue is stopped.
> > > > Otherwise you might attempt to reconfigure running queue.
> > >
> > > I'm not sure if it's necessary to let application use different API sequence
> > for a deferred configure and deferred re-configure.
> > > Can we just call dev_ops->rx_queue_stop before rx_queue_release here
> >
> > I don't follow you here.
> > Let say now inside queue_start() we do check:
> >
> > if (dev->data->rx_queue_state[rx_queue_id] !=
> > RTE_ETH_QUEUE_STATE_STOPPED)
> >
> > Right now it is not possible to call queue_setup() without dev_stop() before
> > it - that's why we have check if (dev->data->dev_started) in queue_setup()
> > right now.
> > Though with your patch it not the case anymore - user is able to call
> > queue_setup() without stopping the whole device.
> > But he still has to stop the queue.
>
> >
> > >
> > > >
> > > >
> > > > > rxq = dev->data->rx_queues;
> > > > > if (rxq[rx_queue_id]) {
> > > > >
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > > > > -ENOTSUP);
> > > >
> > > > I don't think it is *that* straightforward.
> > > > rx_queue_setup() parameters can imply different rx function (and
> > > > related dev
> > > > icesettings) that are already setuped by previous
> > queue_setup()/dev_start.
> > > > So I think you need to do one of 2 things:
> > > > 1. rework ethdev layer to introduce a separate rx function (and
> > > > related
> > > > settings) for each queue.
> > > > 2. at rx_queue_setup() if it is invoked after dev_start - check that
> > > > given queue settings wouldn't contradict with current device
> > > > settings (rx function, etc.).
> > > > If they do - return an error.
> > > Yes, I think what we have is option 2 here, the
> > > dev_ops->rx_queue_setup will return fail if conflict with previous
> > > setting
> >
> > Hmm and what makes you think that?
> > As I know it is not the case right now.
> > Let say I do:
> > ....
> > rx_queue_setup(port=0,queue=0, mp=mb_size_2048);
> > dev_start(port=0);
> > ...
> > rx_queue_setup(port=0,queue=1,mp=mb_size_1024);
> >
> > If current rx function doesn't support multi-segs then second
> > rx_queue_setup() should fail.
> > Though I don't think that would happen with the current implementation.
>
> Why do you think that would not happen? dev_ops->rx_queue_setup can fail, right?
> I mean it's the responsibility of the low-level driver (i40e) to check for conflicts with the current implementation.
Yes, it is the responsibility of the PMD, because only it knows its own logic of rx/tx function selection.
But I don't see such changes in i40e in your patch series.
Perhaps I missed them?
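Just to illustrate the kind of check I mean - a rough sketch only, not code from this series; the two helper predicates (i40e_rx_conf_needs_scatter, i40e_rxq_vec_compatible) are made-up names, not existing i40e functions:

/* Sketch: reject a runtime rx_queue_setup() whose parameters would need a
 * different rx burst function than the one already installed on the port.
 */
static int
i40e_check_runtime_rx_conflict(struct rte_eth_dev *dev,
			       struct i40e_rx_queue *rxq)
{
	struct i40e_adapter *ad =
		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);

	if (!dev->data->dev_started)
		return 0; /* classic (stopped) setup path, nothing to check */

	/* a queue that needs scattered rx cannot join a port whose currently
	 * installed rx function is non-scattered (hypothetical helper) */
	if (i40e_rx_conf_needs_scatter(dev, rxq) && !dev->data->scattered_rx)
		return -EINVAL;

	/* likewise the new queue must fit the already selected vector path
	 * (hypothetical helper) */
	if (ad->rx_vec_allowed && !i40e_rxq_vec_compatible(rxq))
		return -EINVAL;

	return 0;
}

i40e_dev_rx_queue_setup() could call something like this and refuse the runtime setup instead of silently installing a conflicting queue.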
> >
> > Same story for TX offloads, though it probably not that critical, as for most
> > Intel PMDs HW TX offloads will become per port in 18.05.
> >
> > As far as I can see you don't have either of these options implemented right now -
> > that's the problem.
> >
> > > I'm also thinking about option 1, the idea is to move per queue rx/tx
> > function into driver layer, so it will not break existing API.
> > >
> > > 1. driver can expose a capability like per_queue_rx or per_queue_tx
> > > 2. application can enable this capability by dev_config with rte_eth_conf
> > > 3. if per_queue_rx is not enabled, nothing changes, so we are at option 2
> > > 4. if per_queue_rx is enabled, the driver will set rx_pkt_burst to a hook
> > > function which redirects to a function ptr in a per-queue rx function table
> > > (I guess performance is impacted somehow, but this is the cost if you want
> > > different offloads for different queues)
> >
> > I don't think we need to overcomplicate things here.
> > It should be transparent to the user - user just calls queue_setup() - based on
> > its input parameters PMD selects a function that fits best.
> > Pretty much what we have right now, just possibly have an array of functions
> > (one per queue).
>
> If we don't introduce a new capability or something like that, but just take per-queue functions as the default way,
> does that mean we need to change all drivers to adapt to this?
> Or do you mean the below?
>
> If (dev->rx_pkt_burst)
> /* default way */
> else
> /* per queue function */
For me either way seems ok.
The second one is probably a bit easier, as no changes from PMDs are required.
But again - maybe even the rte_ethdev layer can fill the queue's rx_pkt_burst[] array
for the drivers that don't support it - just by copying dev->rx_pkt_burst into it.
Konstantin
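A minimal sketch of that idea, assuming a hypothetical per-queue array rx_pkt_burst_q[] in struct rte_eth_dev (no such field exists today, it only stands for the per-queue function table discussed above):

/* Called from rte_eth_rx_queue_setup(): default the per-queue entry to the
 * per-port rx function for PMDs that don't install their own. */
static void
eth_fill_queue_rx_burst(struct rte_eth_dev *dev, uint16_t queue_id)
{
	if (dev->rx_pkt_burst_q[queue_id] == NULL)
		dev->rx_pkt_burst_q[queue_id] = dev->rx_pkt_burst;
}

/* rte_eth_rx_burst() would then dispatch through the per-queue entry
 * instead of the single per-port dev->rx_pkt_burst: */
static inline uint16_t
eth_rx_burst_per_queue(struct rte_eth_dev *dev, uint16_t queue_id,
		       struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	return dev->rx_pkt_burst_q[queue_id](dev->data->rx_queues[queue_id],
					     rx_pkts, nb_pkts);
}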
>
> Regards
> Qi
>
> >
> > >
> > > >
> > > > From my perspective - 1) is a better choice though it required more
> > > > work, and possibly ABI breakage.
> > > > I did some work in that direction as RFC:
> > > > http://dpdk.org/dev/patchwork/patch/31866/
> > >
> > > I will learn this, thanks for the heads up.
> > > >
> > > > 2) might be also possible, but looks a bit clumsy as
> > > > rx_queue_setup() might now fail even with valid parameters - all
> > > > depends on previous queue configurations.
> > > >
> > > > Same story applies for TX.
> > > >
> > > >
> > > > > + if (dev->data->dev_started &&
> > > > > + !(dev_info.deferred_queue_config_capa &
> > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE))
> > > > > + return -EINVAL;
> > > > > (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > > > > rxq[rx_queue_id] = NULL;
> > > > > }
> > > > > @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id,
> > > > uint16_t tx_queue_id,
> > > > > return -EINVAL;
> > > > > }
> > > > >
> > > > > - if (dev->data->dev_started) {
> > > > > - RTE_PMD_DEBUG_TRACE(
> > > > > - "port %d must be stopped to allow configuration\n",
> > port_id);
> > > > > - return -EBUSY;
> > > > > - }
> > > > > -
> > > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > > > -ENOTSUP);
> > > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup,
> > > > -ENOTSUP);
> > > > >
> > > > > @@ -1596,10 +1593,19 @@ rte_eth_tx_queue_setup(uint16_t
> > port_id,
> > > > uint16_t tx_queue_id,
> > > > > return -EINVAL;
> > > > > }
> > > > >
> > > > > + if (dev->data->dev_started &&
> > > > > + !(dev_info.deferred_queue_config_capa &
> > > > > + DEV_DEFERRED_TX_QUEUE_SETUP))
> > > > > + return -EINVAL;
> > > > > +
> > > > > txq = dev->data->tx_queues;
> > > > > if (txq[tx_queue_id]) {
> > > > >
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> > > > > -ENOTSUP);
> > > > > + if (dev->data->dev_started &&
> > > > > + !(dev_info.deferred_queue_config_capa &
> > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE))
> > > > > + return -EINVAL;
> > > > > (*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
> > > > > txq[tx_queue_id] = NULL;
> > > > > }
> > > > > diff --git a/lib/librte_ether/rte_ethdev.h
> > > > > b/lib/librte_ether/rte_ethdev.h index 036153306..410e58c50 100644
> > > > > --- a/lib/librte_ether/rte_ethdev.h
> > > > > +++ b/lib/librte_ether/rte_ethdev.h
> > > > > @@ -981,6 +981,15 @@ struct rte_eth_conf {
> > > > > */
> > > > > #define DEV_TX_OFFLOAD_SECURITY 0x00020000
> > > > >
> > > > > +#define DEV_DEFERRED_RX_QUEUE_SETUP 0x00000001 /**< Deferred setup rx queue */
> > > > > +#define DEV_DEFERRED_TX_QUEUE_SETUP 0x00000002 /**< Deferred setup tx queue */
> > > > > +#define DEV_DEFERRED_RX_QUEUE_RELEASE 0x00000004 /**< Deferred release rx queue */
> > > > > +#define DEV_DEFERRED_TX_QUEUE_RELEASE 0x00000008 /**< Deferred release tx queue */
> > > > > +
> > > >
> > > > I don't think we do need flags for both setup a and release.
> > > > If runtime setup is supported - surely dynamic release should be
> > > > supported too.
> > > > Also probably RUNTIME_RX_QUEUE_SETUP sounds a bit better.
> > >
> > > Agree
> > >
> > > Thanks
> > > Qi
> > >
> > > >
> > > > Konstantin
> > > >
> > > > > /*
> > > > > * If new Tx offload capabilities are defined, they also must be
> > > > > * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> > > > > @@ -1029,6 +1038,8 @@ struct rte_eth_dev_info {
> > > > > /** Configured number of rx/tx queues */
> > > > > uint16_t nb_rx_queues; /**< Number of RX queues. */
> > > > > uint16_t nb_tx_queues; /**< Number of TX queues. */
> > > > > + uint64_t deferred_queue_config_capa;
> > > > > + /**< queues can be setup/release after dev_start (DEV_DEFERRED_). */
> > > > > };
> > > > >
> > > > > /**
> > > > > --
> > > > > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
2018-03-15 15:38 ` Ananyev, Konstantin
@ 2018-03-16 0:42 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-16 0:42 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 11:39 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, March 15, 2018 3:09 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, March 15, 2018 9:17 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred queue
> > > setup
> > >
> > > Hi Qi,
> > >
> > > > -----Original Message-----
> > > > From: Zhang, Qi Z
> > > > Sent: Thursday, March 15, 2018 3:14 AM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred
> > > > queue setup
> > > >
> > > > Hi Konstantin:
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Wednesday, March 14, 2018 8:32 PM
> > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 1/4] ether: support deferred
> > > > > queue setup
> > > > >
> > > > > Hi Qi,
> > > > >
> > > > > >
> > > > > > The patch let etherdev driver expose the capability flag
> > > > > > through rte_eth_dev_info_get when it support deferred queue
> > > > > > configuraiton, then base on the flag
> > > > > > rte_eth_[rx|tx]_queue_setup could decide continue to setup the
> > > > > > queue or just return fail when device already started.
> > > > > >
> > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > ---
> > > > > > doc/guides/nics/features.rst | 8 ++++++++
> > > > > > lib/librte_ether/rte_ethdev.c | 30
> > > > > > ++++++++++++++++++------------ lib/librte_ether/rte_ethdev.h |
> > > > > > 11 +++++++++++
> > > > > > 3 files changed, 37 insertions(+), 12 deletions(-)
> > > > > >
> > > > > > diff --git a/doc/guides/nics/features.rst
> > > > > > b/doc/guides/nics/features.rst index 1b4fb979f..36ad21a1f
> > > > > > 100644
> > > > > > --- a/doc/guides/nics/features.rst
> > > > > > +++ b/doc/guides/nics/features.rst
> > > > > > @@ -892,7 +892,15 @@ Documentation describes performance
> > > values.
> > > > > >
> > > > > > See ``dpdk.org/doc/perf/*``.
> > > > > >
> > > > > > +.. _nic_features_queue_deferred_setup_capabilities:
> > > > > >
> > > > > > +Queue deferred setup capabilities
> > > > > > +---------------------------------
> > > > > > +
> > > > > > +Supports queue setup / release after device started.
> > > > > > +
> > > > > > +* **[provides] rte_eth_dev_info**: ``deferred_queue_config_capa:DEV_DEFERRED_RX_QUEUE_SETUP,DEV_DEFERRED_TX_QUEUE_SETUP,DEV_DEFERRED_RX_QUEUE_RELEASE,DEV_DEFERRED_TX_QUEUE_RELEASE``.
> > > > > > +* **[related] API**: ``rte_eth_dev_info_get()``.
> > > > > >
> > > > > > .. _nic_features_other:
> > > > > >
> > > > > > diff --git a/lib/librte_ether/rte_ethdev.c
> > > > > > b/lib/librte_ether/rte_ethdev.c index a6ce2a5ba..6c906c4df
> > > > > > 100644
> > > > > > --- a/lib/librte_ether/rte_ethdev.c
> > > > > > +++ b/lib/librte_ether/rte_ethdev.c
> > > > > > @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t
> > > > > > port_id,
> > > > > uint16_t rx_queue_id,
> > > > > > return -EINVAL;
> > > > > > }
> > > > > >
> > > > > > - if (dev->data->dev_started) {
> > > > > > - RTE_PMD_DEBUG_TRACE(
> > > > > > - "port %d must be stopped to allow configuration\n",
> > > port_id);
> > > > > > - return -EBUSY;
> > > > > > - }
> > > > > > -
> > > > > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> > > > > -ENOTSUP);
> > > > > >
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> > > > > -ENOTSUP);
> > > > > >
> > > > > > @@ -1474,10 +1468,19 @@ rte_eth_rx_queue_setup(uint16_t
> > > port_id,
> > > > > uint16_t rx_queue_id,
> > > > > > return -EINVAL;
> > > > > > }
> > > > > >
> > > > > > + if (dev->data->dev_started &&
> > > > > > + !(dev_info.deferred_queue_config_capa &
> > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP))
> > > > > > + return -EINVAL;
> > > > > > +
> > > > >
> > > > > I think now you have to check here that the queue is stopped.
> > > > > Otherwise you might attempt to reconfigure running queue.
> > > >
> > > > I'm not sure if it's necessary to let application use different
> > > > API sequence
> > > for a deferred configure and deferred re-configure.
> > > > Can we just call dev_ops->rx_queue_stop before rx_queue_release
> > > > here
> > >
> > > I don't follow you here.
> > > Let say now inside queue_start() we do check:
> > >
> > > if (dev->data->rx_queue_state[rx_queue_id] !=
> > > RTE_ETH_QUEUE_STATE_STOPPED)
> > >
> > > Right now it is not possible to call queue_setup() without
> > > dev_stop() before it - that's why we have check if
> > > (dev->data->dev_started) in queue_setup() right now.
> > > Though with your patch it not the case anymore - user is able to
> > > call
> > > queue_setup() without stopping the whole device.
> > > But he still has to stop the queue.
> >
> > >
> > > >
> > > > >
> > > > >
> > > > > > rxq = dev->data->rx_queues;
> > > > > > if (rxq[rx_queue_id]) {
> > > > > >
> > > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > > > > > -ENOTSUP);
> > > > >
> > > > > I don't think it is *that* straightforward.
> > > > > rx_queue_setup() parameters can imply different rx function (and
> > > > > related dev
> > > > > icesettings) that are already setuped by previous
> > > queue_setup()/dev_start.
> > > > > So I think you need to do one of 2 things:
> > > > > 1. rework ethdev layer to introduce a separate rx function (and
> > > > > related
> > > > > settings) for each queue.
> > > > > 2. at rx_queue_setup() if it is invoked after dev_start - check
> > > > > that given queue settings wouldn't contradict with current
> > > > > device settings (rx function, etc.).
> > > > > If they do - return an error.
> > > > Yes, I think what we have is option 2 here, the
> > > > dev_ops->rx_queue_setup will return fail if conflict with previous
> > > > setting
> > >
> > > Hmm and what makes you think that?
> > > As I know it is not the case right now.
> > > Let say I do:
> > > ....
> > > rx_queue_setup(port=0,queue=0, mp=mb_size_2048);
> > > dev_start(port=0);
> > > ...
> > > rx_queue_setup(port=0,queue=1,mp=mb_size_1024);
> > >
> > > If current rx function doesn't support multi-segs then second
> > > rx_queue_setup() should fail.
> > > Though I don't think that would happen with the current
> implementation.
> >
> > Why do you think that would not happen? dev_ops->rx_queue_setup can fail,
> > right?
> > I mean it's the responsibility of the low-level driver (i40e) to check for
> > conflicts with the current implementation.
>
> Yes, it is the responsibility of the PMD, because only it knows its own logic of rx/tx
> function selection.
> But I don't see such changes in i40e in your patch series.
> Perhaps I missed them?
OK, I think we are aligned on this patch, and I see the problem in the i40e patch; I will fix it.
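For reference, this is how I picture an application consuming the capability bits - a sketch using the flag and field names proposed in this series (they may still be renamed, e.g. to RUNTIME_*), with port_id, new_qid, nb_rxd and mbuf_pool assumed to be defined elsewhere:

	struct rte_eth_dev_info dev_info;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (dev_info.deferred_queue_config_capa & DEV_DEFERRED_RX_QUEUE_SETUP) {
		/* port is already started: bring up one more rx queue on the fly */
		ret = rte_eth_rx_queue_setup(port_id, new_qid, nb_rxd,
					     rte_eth_dev_socket_id(port_id),
					     NULL, mbuf_pool);
		if (ret == 0)
			ret = rte_eth_dev_rx_queue_start(port_id, new_qid);
	} else {
		/* no runtime setup support: the whole port must be stopped first */
		rte_eth_dev_stop(port_id);
		/* ... setup the queue, then rte_eth_dev_start() again ... */
	}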
>
> > >
> > > Same story for TX offloads, though it probably not that critical, as
> > > for most Intel PMDs HW TX offloads will become per port in 18.05.
> > >
> > > As I can see you do have either of these options implemented right
> > > now - that's the problem.
> > >
> > > > I'm also thinking about option 1, the idea is to move per queue
> > > > rx/tx
> > > function into driver layer, so it will not break existing API.
> > > >
> > > > 1. driver can expose a capability like per_queue_rx or per_queue_tx
> > > > 2. application can enable this capability by dev_config with rte_eth_conf
> > > > 3. if per_queue_rx is not enabled, nothing changes, so we are at option 2
> > > > 4. if per_queue_rx is enabled, the driver will set rx_pkt_burst to a hook
> > > > function which redirects to a function ptr in a per-queue rx function table
> > > > (I guess performance is impacted somehow, but this is the cost if you want
> > > > different offloads for different queues)
> > >
> > > I don't think we need to overcomplicate things here.
> > > It should be transparent to the user - user just calls queue_setup()
> > > - based on its input parameters PMD selects a function that fits best.
> > > Pretty much what we have right now, just possibly have an array of
> > > functions (one per queue).
> >
> > If we don't introduce a new capability or something like that, but just
> > take per-queue functions as the default way, does that mean we need to
> > change all drivers to adapt to this?
> > Or do you mean the below?
> >
> > If (dev->rx_pkt_burst)
> > /* default way */
> > else
> > /* per queue function */
>
> For me either way seems ok.
> The second one is probably a bit easier, as no changes from PMDs are required.
> But again - maybe even the rte_ethdev layer can fill the queue's rx_pkt_burst[]
> array for the drivers that don't support it - just by copying dev->rx_pkt_burst
> into it.
> Konstantin
Ok, I will add this in v2
Thanks
Qi
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-15 15:22 ` Ananyev, Konstantin
@ 2018-03-16 0:52 ` Zhang, Qi Z
2018-03-16 9:54 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-16 0:52 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, March 15, 2018 11:22 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Thursday, March 15, 2018 2:30 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, March 15, 2018 9:23 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > queue setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Zhang, Qi Z
> > > > Sent: Thursday, March 15, 2018 3:22 AM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > queue setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > queue setup
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > To: thomas@monjalon.net
> > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > Qi
> > > > > > Z <qi.z.zhang@intel.com>
> > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > queue setup
> > > > > >
> > > > > > Expose the deferred queue configuration capability and enhance
> > > > > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation
> > > > > > when device already started.
> > > > > >
> > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > ---
> > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a
> > > > > > 100644
> > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev
> > > *dev,
> > > > > struct rte_eth_dev_info *dev_info)
> > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > +
> > > > > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX +
> 1) *
> > > > > > sizeof(uint32_t);
> > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > > > index
> > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > rte_eth_dev
> > > > > *dev,
> > > > > > uint16_t len, i;
> > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > + int ret = 0;
> > > > > >
> > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > I40E_MAC_X722_VF) {
> > > > > > vf =
> I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > rte_eth_dev
> > > > > *dev,
> > > > > > rxq->dcb_tc = i;
> > > > > > }
> > > > > >
> > > > > > + if (dev->data->dev_started) {
> > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > + PMD_DRV_LOG(ERR,
> > > > > > + "Failed to do RX queue initialization");
> > > > > > + return ret;
> > > > > > + }
> > > > > > + if (ad->rx_vec_allowed)
> > > > >
> > > > > Better to check what rx function is installed right now.
> > > > Yes, it should be fixed, need to return fail if any conflict
> > > > >
> > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > > > >
> > > > > I don't think it is a good idea to start/stop queue inside
> > > > > queue_setup/queue_release.
> > > > > There is special API (queue_start/queue_stop) to do this.
> > > >
> > > > The idea is if dev already started, the queue is supposed to be
> > > > started
> > > automatically after queue_setup.
> > >
> > > Why is that?
> > > > Because the device is already started, it's like a running conveyor belt: anything
> you put or replace on it just moves automatically.
>
> Why is that? :)
> You do break existing behavior.
> > Right now it is possible to do:
> > queue_setup(); queue_setup();
> > for the same queue.
> > With your patch it is not any more.
Why not?
I think with my patch,
it assumes we can run the below scenario on the same queue
(note: I assume queue_stop/start has been moved from i40e to the ethdev layer already):
queue_setup + queue_setup + dev_start + queue_setup + queue_setup,
where queue_stop/start are handled inside queue_setup automatically after dev_start.
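Roughly, the ethdev-layer flow I have in mind would be the following - a sketch only, not what the posted v2 does, with error handling omitted and rx_conf taken as non-NULL for brevity:

	/* inside rte_eth_rx_queue_setup(), runtime path */
	if (dev->data->dev_started) {
		if (!(dev_info.deferred_queue_config_capa &
				DEV_DEFERRED_RX_QUEUE_SETUP))
			return -EINVAL;

		/* a running queue is stopped before it is released/re-setup */
		if (dev->data->rx_queue_state[rx_queue_id] ==
				RTE_ETH_QUEUE_STATE_STARTED)
			(*dev->dev_ops->rx_queue_stop)(dev, rx_queue_id);
	}

	/* ... release the old queue and call dev_ops->rx_queue_setup() ... */

	/* unless deferred_start is set, the queue comes back up by itself */
	if (dev->data->dev_started && !rx_conf->rx_deferred_start)
		(*dev->dev_ops->rx_queue_start)(dev, rx_queue_id);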
> And I don't see a good reason to break existing behavior.
> What is the advantage of implicitly calling queue_start() from
> queue_setup()?
> Konstantin
>
> >
> > > Might be user doesn't want to start queue, might be he only wants to
> > > start it.
> > Use deferred_start_flag,
> > > Might be he would need to call queue_setup() once again later before
> > > starting it - based on some logic?
> > Dev_ops->queue_stop will be called first before dev_ops->queue_setup in
> rte_eth_rx|tx_queue_setup, if a queue is running.
> >
> >
> >
> > > If the user wants to setup and start the queue immediately he can always
> do:
> > >
> > > rc = queue_setup(...);
> > > if (rc == 0)
> > > queue_start(...);
> >
> > application no need to call queue_start explicitly in this case.
> >
> > >
> > > We have a pretty well defined API here let's keep it like that.
> > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 0:52 ` Zhang, Qi Z
@ 2018-03-16 9:54 ` Ananyev, Konstantin
2018-03-16 11:00 ` Bruce Richardson
` (2 more replies)
0 siblings, 3 replies; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-16 9:54 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Friday, March 16, 2018 12:52 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, March 15, 2018 11:22 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Thursday, March 15, 2018 2:30 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Thursday, March 15, 2018 9:23 PM
> > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > queue setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Zhang, Qi Z
> > > > > Sent: Thursday, March 15, 2018 3:22 AM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > queue setup
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Ananyev, Konstantin
> > > > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > queue setup
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > To: thomas@monjalon.net
> > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > Qi
> > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > > queue setup
> > > > > > >
> > > > > > > Expose the deferred queue configuration capability and enhance
> > > > > > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation
> > > > > > > when device already started.
> > > > > > >
> > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > ---
> > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a
> > > > > > > 100644
> > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev
> > > > *dev,
> > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > +
> > > > > > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX +
> > 1) *
> > > > > > > sizeof(uint32_t);
> > > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > index
> > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > > rte_eth_dev
> > > > > > *dev,
> > > > > > > uint16_t len, i;
> > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > + int ret = 0;
> > > > > > >
> > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > > I40E_MAC_X722_VF) {
> > > > > > > vf =
> > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > > rte_eth_dev
> > > > > > *dev,
> > > > > > > rxq->dcb_tc = i;
> > > > > > > }
> > > > > > >
> > > > > > > + if (dev->data->dev_started) {
> > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > + "Failed to do RX queue initialization");
> > > > > > > + return ret;
> > > > > > > + }
> > > > > > > + if (ad->rx_vec_allowed)
> > > > > >
> > > > > > Better to check what rx function is installed right now.
> > > > > Yes, it should be fixed, need to return fail if any conflict
> > > > > >
> > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > > > > >
> > > > > > I don't think it is a good idea to start/stop queue inside
> > > > > > queue_setup/queue_release.
> > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > >
> > > > > The idea is if dev already started, the queue is supposed to be
> > > > > started
> > > > automatically after queue_setup.
> > > >
> > > > Why is that?
> > > > Because the device is already started, it's like a running conveyor belt: anything
> > you put or replace on it just moves automatically.
> >
> > Why is that? :)
> > You do break existing behavior.
> > > Right now it is possible to do:
> > > queue_setup(); queue_setup();
> > > for the same queue.
> > > With your patch it is not any more.
> Why not?
> I think with my patch,
> It assumes we can run below scenario on the same queue.
> (note, I assume queue_stop/start has been moved from i40e to ethedev layer already.)
> queue_setup + queue_setup + dev_start + queue_setup + queue_setup,
Because you can't do queue_setup() on an already started queue.
So if you do start() inside setup(), the second setup() should fail.
> queue_stop/start are handled inside queue_setup automatically after dev_started?
Again - I don't see any advantage in changing the existing API behavior and introducing implicit
start/stop inside setup.
It only introduces extra confusion for the users.
So I still think we had better keep the existing behavior.
Konstantin
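In other words, keeping the explicit model, the runtime path in rte_eth_rx_queue_setup() would only need something like the following sketch (the capability flag is the one proposed in this series, not an existing ethdev definition):

	if (dev->data->dev_started) {
		if (!(dev_info.deferred_queue_config_capa &
				DEV_DEFERRED_RX_QUEUE_SETUP))
			return -EINVAL;

		/* setup never touches a running queue: the application must
		 * have stopped it (or created it deferred) beforehand */
		if (dev->data->rx_queue_state[rx_queue_id] !=
				RTE_ETH_QUEUE_STATE_STOPPED)
			return -EBUSY;
	}
	/* ... setup as usual; the application later calls
	 * rte_eth_dev_rx_queue_start() when it wants traffic on the queue. */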
> .
> > And I don't see a good reason to break existing behavior.
> > What is the advantage of implicitly calling queue_start() from
> > queue_setup()?
> > Konstantin
> >
> > >
> > > > Might be user doesn't want to start queue, might be he only wants to
> > > > start it.
> > > Use deferred_start_flag,
> > > > Might be he would need to call queue_setup() once again later before
> > > > starting it - based on some logic?
> > > Dev_ops->queue_stop will be called first before dev_ops->queue_setup in
> > rte_eth_rx|tx_queue_setup, if a queue is running.
> > >
> > >
> > >
> > > > If the user wants to setup and start the queue immediately he can always
> > do:
> > > >
> > > > rc = queue_setup(...);
> > > > if (rc == 0)
> > > > queue_start(...);
> > >
> > > application no need to call queue_start explicitly in this case.
> > >
> > > >
> > > > We have a pretty well defined API here let's keep it like that.
> > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 9:54 ` Ananyev, Konstantin
@ 2018-03-16 11:00 ` Bruce Richardson
2018-03-16 13:18 ` Zhang, Qi Z
2018-03-16 14:15 ` Zhang, Qi Z
2 siblings, 0 replies; 95+ messages in thread
From: Bruce Richardson @ 2018-03-16 11:00 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Zhang, Qi Z, thomas, dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
On Fri, Mar 16, 2018 at 09:54:03AM +0000, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Friday, March 16, 2018 12:52 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, March 15, 2018 11:22 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Zhang, Qi Z
> > > > Sent: Thursday, March 15, 2018 2:30 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > > > setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Thursday, March 15, 2018 9:23 PM
> > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > queue setup
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Zhang, Qi Z
> > > > > > Sent: Thursday, March 15, 2018 3:22 AM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > thomas@monjalon.net
> > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > <wenzhuo.lu@intel.com>
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > queue setup
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Ananyev, Konstantin
> > > > > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > > queue setup
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> > > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > > To: thomas@monjalon.net
> > > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > > Qi
> > > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > > > > queue setup
> > > > > > > >
> > > > > > > > Expose the deferred queue configuration capability and enhance
> > > > > > > > i40e_dev_[rx|tx]_queue_[setup|release] to handle the situation
> > > > > > > > when device already started.
> > > > > > > >
> > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > ---
> > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index 06b0f03a1..843a0c42a
> > > > > > > > 100644
> > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct rte_eth_dev
> > > > > *dev,
> > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > +
> > > > > > > > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX +
> > > 1) *
> > > > > > > > sizeof(uint32_t);
> > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > > > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > index
> > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > > > rte_eth_dev
> > > > > > > *dev,
> > > > > > > > uint16_t len, i;
> > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > + int ret = 0;
> > > > > > > >
> > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > vf =
> > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > > > rte_eth_dev
> > > > > > > *dev,
> > > > > > > > rxq->dcb_tc = i;
> > > > > > > > }
> > > > > > > >
> > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > + "Failed to do RX queue initialization");
> > > > > > > > + return ret;
> > > > > > > > + }
> > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > >
> > > > > > > Better to check what rx function is installed right now.
> > > > > > Yes, it should be fixed, need to return fail if any conflict
> > > > > > >
> > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > > > > > >
> > > > > > > I don't think it is a good idea to start/stop queue inside
> > > > > > > queue_setup/queue_release.
> > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > >
> > > > > > The idea is if dev already started, the queue is supposed to be
> > > > > > started
> > > > > automatically after queue_setup.
> > > > >
> > > > > Why is that?
> > > > Because device is already started, its like a running conveyor belt, anything
> > > you put or replace on it just moves automatically.
> > >
> > > Why is that? :)
> > > You do break existing behavior.
> > > Right now it possible to do:
> > > queue_setup(); queue_setup();
> > > for the same queue.
> > > With you patch is not any more
> > Why not?
> > I think with my patch,
> > It assumes we can run below scenario on the same queue.
> > (note, I assume queue_stop/start has been moved from i40e to ethedev layer already.)
> > queue_setup + queue_setup + dev_start + queue_setup + queue_setup,
>
> Because you can't do queue_setup() on already started queue.
> So if you do start() inside setup() second setup() should fail.
>
> > queue_stop/start are handled inside queue_setup automatically after dev_started?
>
> Again - I don't see any advantages to change existing API behavior and introduce implicit
> start/stop inside setup.
> It only introduce extra confusion for the users.
> So I still think we better keep existing behavior.
> Konstantin
>
+1 for keeping existing behaviour unless there is a compelling reason to
change.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 9:54 ` Ananyev, Konstantin
2018-03-16 11:00 ` Bruce Richardson
@ 2018-03-16 13:18 ` Zhang, Qi Z
2018-03-16 14:15 ` Zhang, Qi Z
2 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-16 13:18 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Friday, March 16, 2018 5:54 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Friday, March 16, 2018 12:52 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, March 15, 2018 11:22 PM
> > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > queue setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Zhang, Qi Z
> > > > Sent: Thursday, March 15, 2018 2:30 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > queue setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Thursday, March 15, 2018 9:23 PM
> > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > queue setup
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Zhang, Qi Z
> > > > > > Sent: Thursday, March 15, 2018 3:22 AM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > thomas@monjalon.net
> > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > <wenzhuo.lu@intel.com>
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > deferred queue setup
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Ananyev, Konstantin
> > > > > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > deferred queue setup
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi
> > > > > > > > Zhang
> > > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > > To: thomas@monjalon.net
> > > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>;
> > > > > > > > Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > > Qi
> > > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > > deferred queue setup
> > > > > > > >
> > > > > > > > Expose the deferred queue configuration capability and
> > > > > > > > enhance i40e_dev_[rx|tx]_queue_[setup|release] to handle
> > > > > > > > the situation when device already started.
> > > > > > > >
> > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > ---
> > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > 100644
> > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > rte_eth_dev
> > > > > *dev,
> > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > +
> > > > > > > > dev_info->hash_key_size =
> (I40E_PFQF_HKEY_MAX_INDEX +
> > > 1) *
> > > > > > > > sizeof(uint32_t);
> > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > > > a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > > > rte_eth_dev
> > > > > > > *dev,
> > > > > > > > uint16_t len, i;
> > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > + int ret = 0;
> > > > > > > >
> > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > vf =
> > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > > > rte_eth_dev
> > > > > > > *dev,
> > > > > > > > rxq->dcb_tc = i;
> > > > > > > > }
> > > > > > > >
> > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > + "Failed to do RX queue initialization");
> > > > > > > > + return ret;
> > > > > > > > + }
> > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > >
> > > > > > > Better to check what rx function is installed right now.
> > > > > > Yes, it should be fixed, need to return fail if any conflict
> > > > > > >
> > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > + ret = i40e_dev_rx_queue_start(dev, queue_idx);
> > > > > > >
> > > > > > > I don't think it is a good idea to start/stop queue inside
> > > > > > > queue_setup/queue_release.
> > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > >
> > > > > > The idea is if dev already started, the queue is supposed to
> > > > > > be started
> > > > > automatically after queue_setup.
> > > > >
> > > > > Why is that?
> > > > Because device is already started, its like a running conveyor
> > > > belt, anything
> > > you put or replace on it just moves automatically.
> > >
> > > Why is that? :)
> > > You do break existing behavior.
> > > Right now it possible to do:
> > > queue_setup(); queue_setup();
> > > for the same queue.
> > > With you patch is not any more
> > Why not?
> > I think with my patch,
> > It assumes we can run below scenario on the same queue.
> > (note, I assume queue_stop/start has been moved from i40e to ethedev
> > layer already.) queue_setup + queue_setup + dev_start + queue_setup +
> > queue_setup,
>
> Because you can't do queue_setup() on an already started queue.
> So if you do start() inside setup(), the second setup() should fail.
No, because queue_release will call queue_stop.
And as I said before, it's better to move queue_stop into the ether layer, so it's not an issue.
>
> > queue_stop/start are handled inside queue_setup automatically after
> dev_started?
>
> Again - I don't see any advantage in changing the existing API behavior and
> introducing implicit start/stop inside setup.
> It only introduces extra confusion for the users.
> So I still think we had better keep the existing behavior.
> Konstantin
OK, let me try again :)
I think the patch tries to keep deferred setup independent of deferred start.
Deferred setup does not necessarily imply a deferred start.
Which means:
queue_setup + dev_start = dev_start + queue_setup
queue_setup(deferred) + dev_start + queue_start = dev_start + queue_setup(deferred) + queue_start
queue_setup + dev_start + queue_setup(same queue) = dev_start + queue_setup + queue_setup(same queue)
But not:
queue_setup + dev_start = dev_start + queue_setup + queue_start
queue_setup(deferred) + dev_start + queue_start = dev_start + queue_setup(ignore deferred) + queue_start
queue_setup + dev_start + queue_setup(same queue) = dev_start + queue_setup + queue_stop + queue_setup + queue_start
I think option 1 has a consistent pattern and is easy to understand, while option 2 just adds unnecessary queue_start/queue_stop and makes deferred_start redundant in some situations.
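Spelled out with the real API names (just a sketch of the two behaviors being discussed, other setup arguments omitted; which behavior setup should have is exactly the open question):

	/* Option 1 (what the patch aims at): on a started port, setup alone is
	 * enough, the queue is running afterwards unless deferred_start is set. */
	rte_eth_dev_start(port_id);
	rte_eth_rx_queue_setup(port_id, qid, ...);        /* queue usable here */

	/* Option 2 (explicit model): setup never starts the queue by itself. */
	rte_eth_dev_start(port_id);
	rte_eth_rx_queue_setup(port_id, qid, ...);
	rte_eth_dev_rx_queue_start(port_id, qid);          /* extra call needed */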
>
> > .
> > > And I don't see a good reason to break existing behavior.
I don't think it breaks any existing behavior; again, deferred setup does not imply deferred start, because dev_start implies queue_start, and we follow this logic.
> > > What is the advantage of implicitly calling queue_start() from
> > > queue_setup()?
> > > Konstantin
> > >
> > > >
> > > > > Might be user doesn't want to start queue, might be he only
> > > > > wants to start it.
> > > > Use deferred_start_flag,
> > > > > Might be he would need to call queue_setup() once again later
> > > > > before starting it - based on some logic?
> > > > Dev_ops->queue_stop will be called first before
> > > > dev_ops->queue_setup in
> > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > >
> > > >
> > > >
> > > > > If the user wants to setup and start the queue immediately he
> > > > > can always
> > > do:
> > > > >
> > > > > rc = queue_setup(...);
> > > > > if (rc == 0)
> > > > > queue_start(...);
> > > >
> > > > application no need to call queue_start explicitly in this case.
> > > >
> > > > >
> > > > > We have a pretty well defined API here let's keep it like that.
> > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 9:54 ` Ananyev, Konstantin
2018-03-16 11:00 ` Bruce Richardson
2018-03-16 13:18 ` Zhang, Qi Z
@ 2018-03-16 14:15 ` Zhang, Qi Z
2018-03-16 18:47 ` Ananyev, Konstantin
2 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-16 14:15 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Friday, March 16, 2018 9:18 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Friday, March 16, 2018 5:54 PM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Friday, March 16, 2018 12:52 AM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > thomas@monjalon.net
> > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > queue setup
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Thursday, March 15, 2018 11:22 PM
> > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > <wenzhuo.lu@intel.com>
> > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > queue setup
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Zhang, Qi Z
> > > > > Sent: Thursday, March 15, 2018 2:30 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > thomas@monjalon.net
> > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > <wenzhuo.lu@intel.com>
> > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred
> > > > > queue setup
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Ananyev, Konstantin
> > > > > > Sent: Thursday, March 15, 2018 9:23 PM
> > > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > <wenzhuo.lu@intel.com>
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > deferred queue setup
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Zhang, Qi Z
> > > > > > > Sent: Thursday, March 15, 2018 3:22 AM
> > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > thomas@monjalon.net
> > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu,
> > > > > > > Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > <wenzhuo.lu@intel.com>
> > > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > deferred queue setup
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Ananyev, Konstantin
> > > > > > > > Sent: Wednesday, March 14, 2018 8:36 PM
> > > > > > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>;
> > > > > > > > thomas@monjalon.net
> > > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>;
> > > > > > > > Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> > > > > > > > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > > deferred queue setup
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi
> > > > > > > > > Zhang
> > > > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > > > To: thomas@monjalon.net
> > > > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>;
> > > > > > > > > Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > > > Qi
> > > > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > > > deferred queue setup
> > > > > > > > >
> > > > > > > > > Expose the deferred queue configuration capability and
> > > > > > > > > enhance i40e_dev_[rx|tx]_queue_[setup|release] to handle
> > > > > > > > > the situation when device already started.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > > ---
> > > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > > 100644
> > > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > > rte_eth_dev
> > > > > > *dev,
> > > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > > +
> > > > > > > > > dev_info->hash_key_size =
> > (I40E_PFQF_HKEY_MAX_INDEX +
> > > > 1) *
> > > > > > > > > sizeof(uint32_t);
> > > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > > > > a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > > > > rte_eth_dev
> > > > > > > > *dev,
> > > > > > > > > uint16_t len, i;
> > > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > > + int ret = 0;
> > > > > > > > >
> > > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > > vf =
> > > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > > > > rte_eth_dev
> > > > > > > > *dev,
> > > > > > > > > rxq->dcb_tc = i;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > > + "Failed to do RX queue initialization");
> > > > > > > > > + return ret;
> > > > > > > > > + }
> > > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > > >
> > > > > > > > Better to check what rx function is installed right now.
> > > > > > > Yes, it should be fixed, need to return fail if any conflict
> > > > > > > >
> > > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > > + ret = i40e_dev_rx_queue_start(dev,
> queue_idx);
> > > > > > > >
> > > > > > > > I don't think it is a good idea to start/stop queue inside
> > > > > > > > queue_setup/queue_release.
> > > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > > >
> > > > > > > The idea is if dev already started, the queue is supposed to
> > > > > > > be started
> > > > > > automatically after queue_setup.
> > > > > >
> > > > > > Why is that?
> > > > > Because device is already started, its like a running conveyor
> > > > > belt, anything
> > > > you put or replace on it just moves automatically.
> > > >
> > > > Why is that? :)
> > > > You do break existing behavior.
> > > > Right now it possible to do:
> > > > queue_setup(); queue_setup();
> > > > for the same queue.
> > > > With you patch is not any more
> > > Why not?
> > > I think with my patch,
> > > It assumes we can run below scenario on the same queue.
> > > (note, I assume queue_stop/start has been moved from i40e to ethedev
> > > layer already.) queue_setup + queue_setup + dev_start + queue_setup
> > > + queue_setup,
> >
> > Because you can't do queue_setup() on already started queue.
> > So if you do start() inside setup() second setup() should fail.
> NO, because in queue_release, it will call queue_stop And as I said before, it's
> better to move to queue_stop in ether layer, it's not an issue.
> >
> > > queue_stop/start are handled inside queue_setup automatically after
> > dev_started?
> >
> > Again - I don't see any advantages to change existing API behavior and
> > introduce implicit start/stop inside setup.
> > It only introduce extra confusion for the users.
> > So I still think we better keep existing behavior.
> > Konstantin
>
> OK, let me try again :)
> I think the patch try to keep deferred setup independent of deferred start
> Deferred setup does not necessary to imply a deferred start.
> Which means
> Queue_setup + dev_start = dev_start + queue_setup
> Queue_setup(deferred) + dev_start + queue_start = dev_start +
> queue_setup(deferred) + queue_start.
> Queue_setup + dev_start + queue_setup(same queue) = dev_start +
> queue_setup + queue_setup(same queue)
>
One mistake in the third item; it should be:
Queue_setup + Queue_setup(same queue) + dev_start = queue_setup + dev_start + queue_setup(same queue)
> But not
> Queue_setup + dev_start = dev_start+ queue_setup + queue_start
> Queue_setup(deffered) + dev_start +qeueu_start = dev_start+ queue_setup
> (ignore deferred)+ queue_start Queue_setup + dev_start +
> queue_setup(same queue) = dev_start + queue_setup + queue_stop +
> queue_setup + queue_start.
The third item should be:
Queue_setup + Queue_setup(same queue) + dev_start = queue_setup + dev_start + queue_stop + queue_setup(same queue) + queue_start
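To make that concrete, here is a minimal application-level sketch of the two orderings in the third item (hypothetical code under the proposed semantics, not current ethdev behavior; the port is assumed to be configured already and mp is an existing mempool):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* queue_setup + queue_setup(same queue) + dev_start */
static int
resetup_before_start(uint16_t port, struct rte_mempool *mp)
{
	int rc;

	rc = rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, mp);
	if (rc == 0)
		rc = rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, mp);
	if (rc == 0)
		rc = rte_eth_dev_start(port);
	return rc;
}

/* queue_setup + dev_start + queue_setup(same queue): under the proposed
 * semantics the second setup would implicitly stop, re-configure and
 * restart queue 0, so the end state matches resetup_before_start(). */
static int
resetup_after_start(uint16_t port, struct rte_mempool *mp)
{
	int rc;

	rc = rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, mp);
	if (rc == 0)
		rc = rte_eth_dev_start(port);
	if (rc == 0)
		rc = rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, mp);
	return rc;
}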
>
> I think option 1 have the pattern and easy to understand and option2 just
> add unnecessary queue_start/queue_stop and make deferred_start
> redundant at some situation.
> >
> > > .
> > > > And I don't see an good reason to break existing behavior.
> I don't think it break any exist behavior, again deferred setup does not imply
> deferred start, because dev_start imply queue_start, and we follow this logic.
>
> > > > What is the advantage of implicit call queue_start() implicitly
> > > > from the queue_setup()/?
> > > > Konstantin
> > > >
> > > > >
> > > > > > Might be user doesn't want to start queue, might be he only
> > > > > > wants to start it.
> > > > > Use deferred_start_flag,
> > > > > > Might be he would need to call queue_setup() once again later
> > > > > > before starting it - based on some logic?
> > > > > Dev_ops->queue_stop will be called first before
> > > > > dev_ops->queue_setup in
> > > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > > >
> > > > >
> > > > >
> > > > > > If the user wants to setup and start the queue immediately he
> > > > > > can always
> > > > do:
> > > > > >
> > > > > > rc = queue_setup(...);
> > > > > > if (rc == 0)
> > > > > > queue_start(...);
> > > > >
> > > > > application no need to call queue_start explicitly in this case.
> > > > >
> > > > > >
> > > > > > We have a pretty well defined API here let's keep it like that.
> > > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 14:15 ` Zhang, Qi Z
@ 2018-03-16 18:47 ` Ananyev, Konstantin
2018-03-18 7:55 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-16 18:47 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Friday, March 16, 2018 2:15 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
>
>
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi
> > > > > > > > > > Zhang
> > > > > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > > > > To: thomas@monjalon.net
> > > > > > > > > > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>;
> > > > > > > > > > Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > > > > Qi
> > > > > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > > > > deferred queue setup
> > > > > > > > > >
> > > > > > > > > > Expose the deferred queue configuration capability and
> > > > > > > > > > enhance i40e_dev_[rx|tx]_queue_[setup|release] to handle
> > > > > > > > > > the situation when device already started.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > > > ---
> > > > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > > > 100644
> > > > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > > > rte_eth_dev
> > > > > > > *dev,
> > > > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > > > +
> > > > > > > > > > dev_info->hash_key_size =
> > > (I40E_PFQF_HKEY_MAX_INDEX +
> > > > > 1) *
> > > > > > > > > > sizeof(uint32_t);
> > > > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff --git
> > > > > > > > > > a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct
> > > > > > > rte_eth_dev
> > > > > > > > > *dev,
> > > > > > > > > > uint16_t len, i;
> > > > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > > > + int ret = 0;
> > > > > > > > > >
> > > > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> > > > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > > > vf =
> > > > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > > > @@ -1841,6 +1842,25 @@ i40e_dev_rx_queue_setup(struct
> > > > > > > rte_eth_dev
> > > > > > > > > *dev,
> > > > > > > > > > rxq->dcb_tc = i;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > > > + "Failed to do RX queue initialization");
> > > > > > > > > > + return ret;
> > > > > > > > > > + }
> > > > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > > > >
> > > > > > > > > Better to check what rx function is installed right now.
> > > > > > > > Yes, it should be fixed, need to return fail if any conflict
> > > > > > > > >
> > > > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > > > + ret = i40e_dev_rx_queue_start(dev,
> > queue_idx);
> > > > > > > > >
> > > > > > > > > I don't think it is a good idea to start/stop queue inside
> > > > > > > > > queue_setup/queue_release.
> > > > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > > > >
> > > > > > > > The idea is if dev already started, the queue is supposed to
> > > > > > > > be started
> > > > > > > automatically after queue_setup.
> > > > > > >
> > > > > > > Why is that?
> > > > > > Because device is already started, its like a running conveyor
> > > > > > belt, anything
> > > > > you put or replace on it just moves automatically.
> > > > >
> > > > > Why is that? :)
> > > > > You do break existing behavior.
> > > > > Right now it possible to do:
> > > > > queue_setup(); queue_setup();
> > > > > for the same queue.
> > > > > With you patch is not any more
> > > > Why not?
> > > > I think with my patch,
> > > > It assumes we can run below scenario on the same queue.
> > > > (note, I assume queue_stop/start has been moved from i40e to ethedev
> > > > layer already.) queue_setup + queue_setup + dev_start + queue_setup
> > > > + queue_setup,
> > >
> > > Because you can't do queue_setup() on already started queue.
> > > So if you do start() inside setup() second setup() should fail.
> > NO, because in queue_release, it will call queue_stop And as I said before, it's
> > better to move to queue_stop in ether layer, it's not an issue.
> > >
> > > > queue_stop/start are handled inside queue_setup automatically after
> > > dev_started?
> > >
> > > Again - I don't see any advantages to change existing API behavior and
> > > introduce implicit start/stop inside setup.
> > > It only introduce extra confusion for the users.
> > > So I still think we better keep existing behavior.
> > > Konstantin
> >
> > OK, let me try again :)
> > I think the patch try to keep deferred setup independent of deferred start
> > Deferred setup does not necessary to imply a deferred start.
I don't understand what 'deferred setup' means.
We do have deferred_start in the queue config, but it is only used by dev_start().
Please stop implying things.
We have an API which is quite straightforward and does exactly what it states.
- queue_setup() - if queue is not started, then setup the queue.
- queue_start() - if queue is not started, then start the queue.
- queue_stop() - if queue is started, then stop the queue.
- dev_start() - in terms of queue behavior
for all configured queues; do
if queue->deferred_start == 0; then queue_start(queue);
done
Let's keep it like that - nice and simple.
No need to introduce such nonsense as 'deferred setup' or an implicit start/stop inside setup.
That just would add more mess and confusion.
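From the application side, the explicit flow for reconfiguring a queue of a running port would look like the sketch below (hypothetical code; it assumes queue_setup() is accepted on a stopped queue of a started device, which is what the deferred setup capability in this series is meant to allow; port and mp are assumed to exist):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
reconfigure_rxq_explicitly(uint16_t port, uint16_t q, struct rte_mempool *mp)
{
	int rc;

	/* Stop RX on the queue before touching its configuration. */
	rc = rte_eth_dev_rx_queue_stop(port, q);
	if (rc != 0)
		return rc;

	/* Re-setup the (now stopped) queue. */
	rc = rte_eth_rx_queue_setup(port, q, 512, rte_socket_id(), NULL, mp);
	if (rc != 0)
		return rc;

	/* Start RX again - explicitly, via the dedicated API. */
	return rte_eth_dev_rx_queue_start(port, q);
}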
> > Which means
> > Queue_setup + dev_start = dev_start + queue_setup
> > Queue_setup(deferred) + dev_start + queue_start = dev_start +
> > queue_setup(deferred) + queue_start.
> > Queue_setup + dev_start + queue_setup(same queue) = dev_start +
> > queue_setup + queue_setup(same queue)
> >
>
> One mistake for the third item, It should be
> Queue_setup + Queue_setup(same queue) + dev_start = queue_setup + dev_start + queue_setup(same queue)
>
> > But not
> > Queue_setup + dev_start = dev_start+ queue_setup + queue_start
> > Queue_setup(deffered) + dev_start +qeueu_start = dev_start+ queue_setup
> > (ignore deferred)+ queue_start Queue_setup + dev_start +
> > queue_setup(same queue) = dev_start + queue_setup + queue_stop +
> > queue_setup + queue_start.
>
> Third item should be
> Queue_setup + Queue_setup(same queue) + dev_start = queue_setup + dev_start + queue_stop + queue_setup(same queue) + queue_start
> >
> > I think option 1 have the pattern and easy to understand
I don't think so.
From my perspective it just introduces more confusion for the user.
> and option2 just add unnecessary queue_start/queue_stop
Why unnecessary? If the user wants to start the queue, he/she calls queue_start().
It is obvious, isn't it?
> and make deferred_start redundant at some situation.
Deferred start is used only by dev_start; that's what it was intended for.
Let it stay that way.
BTW, we could get rid of it and instead give dev_start() a parameter
listing the queues to start (or not to start) - that would be great.
But that's a matter for a different discussion, I think.
Konstantin
> > >
> > > > .
> > > > > And I don't see an good reason to break existing behavior.
> > I don't think it break any exist behavior, again deferred setup does not imply
> > deferred start, because dev_start imply queue_start, and we follow this logic.
> >
> > > > > What is the advantage of implicit call queue_start() implicitly
> > > > > from the queue_setup()/?
> > > > > Konstantin
> > > > >
> > > > > >
> > > > > > > Might be user doesn't want to start queue, might be he only
> > > > > > > wants to start it.
> > > > > > Use deferred_start_flag,
> > > > > > > Might be he would need to call queue_setup() once again later
> > > > > > > before starting it - based on some logic?
> > > > > > Dev_ops->queue_stop will be called first before
> > > > > > dev_ops->queue_setup in
> > > > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > > > >
> > > > > >
> > > > > >
> > > > > > > If the user wants to setup and start the queue immediately he
> > > > > > > can always
> > > > > do:
> > > > > > >
> > > > > > > rc = queue_setup(...);
> > > > > > > if (rc == 0)
> > > > > > > queue_start(...);
> > > > > >
> > > > > > application no need to call queue_start explicitly in this case.
> > > > > >
> > > > > > >
> > > > > > > We have a pretty well defined API here let's keep it like that.
> > > > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-16 18:47 ` Ananyev, Konstantin
@ 2018-03-18 7:55 ` Zhang, Qi Z
2018-03-20 13:18 ` Ananyev, Konstantin
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-18 7:55 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Saturday, March 17, 2018 2:48 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > -----Original Message-----
> > From: Zhang, Qi Z
> > Sent: Friday, March 16, 2018 2:15 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net
> > Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> > setup
> >
> >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> > > > > > > > > > > Qi Zhang
> > > > > > > > > > > Sent: Friday, March 2, 2018 4:13 AM
> > > > > > > > > > > To: thomas@monjalon.net
> > > > > > > > > > > Cc: dev@dpdk.org; Xing, Beilei
> > > > > > > > > > > <beilei.xing@intel.com>; Wu, Jingjing
> > > > > > > > > > > <jingjing.wu@intel.com>; Lu, Wenzhuo
> > > > > > > > > > > <wenzhuo.lu@intel.com>; Zhang,
> > > > > > > > > > Qi
> > > > > > > > > > > Z <qi.z.zhang@intel.com>
> > > > > > > > > > > Subject: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable
> > > > > > > > > > > deferred queue setup
> > > > > > > > > > >
> > > > > > > > > > > Expose the deferred queue configuration capability
> > > > > > > > > > > and enhance i40e_dev_[rx|tx]_queue_[setup|release]
> > > > > > > > > > > to handle the situation when device already started.
> > > > > > > > > > >
> > > > > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > > > > ---
> > > > > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > > > > >
> > > > > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > > > > 100644
> > > > > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > > > > rte_eth_dev
> > > > > > > > *dev,
> > > > > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > > > > +
> > > > > > > > > > > dev_info->hash_key_size =
> > > > (I40E_PFQF_HKEY_MAX_INDEX +
> > > > > > 1) *
> > > > > > > > > > > sizeof(uint32_t);
> > > > > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff
> > > > > > > > > > > --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > @@ -1712,6 +1712,7 @@
> i40e_dev_rx_queue_setup(struct
> > > > > > > > rte_eth_dev
> > > > > > > > > > *dev,
> > > > > > > > > > > uint16_t len, i;
> > > > > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > > > > + int ret = 0;
> > > > > > > > > > >
> > > > > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type
> ==
> > > > > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > > > > vf =
> > > > > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > > > > @@ -1841,6 +1842,25 @@
> > > > > > > > > > > i40e_dev_rx_queue_setup(struct
> > > > > > > > rte_eth_dev
> > > > > > > > > > *dev,
> > > > > > > > > > > rxq->dcb_tc = i;
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > > > > + "Failed to do RX queue
> initialization");
> > > > > > > > > > > + return ret;
> > > > > > > > > > > + }
> > > > > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > > > > >
> > > > > > > > > > Better to check what rx function is installed right now.
> > > > > > > > > Yes, it should be fixed, need to return fail if any
> > > > > > > > > conflict
> > > > > > > > > >
> > > > > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > > > > + ret = i40e_dev_rx_queue_start(dev,
> > > queue_idx);
> > > > > > > > > >
> > > > > > > > > > I don't think it is a good idea to start/stop queue
> > > > > > > > > > inside queue_setup/queue_release.
> > > > > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > > > > >
> > > > > > > > > The idea is if dev already started, the queue is
> > > > > > > > > supposed to be started
> > > > > > > > automatically after queue_setup.
> > > > > > > >
> > > > > > > > Why is that?
> > > > > > > Because device is already started, its like a running
> > > > > > > conveyor belt, anything
> > > > > > you put or replace on it just moves automatically.
> > > > > >
> > > > > > Why is that? :)
> > > > > > You do break existing behavior.
> > > > > > Right now it possible to do:
> > > > > > queue_setup(); queue_setup();
> > > > > > for the same queue.
> > > > > > With you patch is not any more
> > > > > Why not?
> > > > > I think with my patch,
> > > > > It assumes we can run below scenario on the same queue.
> > > > > (note, I assume queue_stop/start has been moved from i40e to
> > > > > ethedev layer already.) queue_setup + queue_setup + dev_start +
> > > > > queue_setup
> > > > > + queue_setup,
> > > >
> > > > Because you can't do queue_setup() on already started queue.
> > > > So if you do start() inside setup() second setup() should fail.
> > > NO, because in queue_release, it will call queue_stop And as I said
> > > before, it's better to move to queue_stop in ether layer, it's not an issue.
> > > >
> > > > > queue_stop/start are handled inside queue_setup automatically
> > > > > after
> > > > dev_started?
> > > >
> > > > Again - I don't see any advantages to change existing API behavior
> > > > and introduce implicit start/stop inside setup.
> > > > It only introduce extra confusion for the users.
> > > > So I still think we better keep existing behavior.
> > > > Konstantin
> > >
> > > OK, let me try again :)
> > > I think the patch try to keep deferred setup independent of deferred
> > > start Deferred setup does not necessary to imply a deferred start.
>
> I don't understand what means 'deferred setup'.
> We do have deferred_start for queue config, but it only used by dev_start().
> Please, stop imply anything.
> We have an API which is quite straightforward and does exactly what it
> states.
>
> - queue_setup() - if queue is not started, then setup the queue.
> - queue_start() - if queue is not started, then start the queue.
> - queue_stop() - if queue is started, then stop the queue.
> - dev_start() - in terms of queue behavior
> for all configured queues; do
> if queue->deferred_start != 0; then queue_start(queue);
> done
>
> Let's keep it like that - nice and simple.
Yes, let's keep it nice and simple at the dev_ops layer.
But the etherdev layer should be more friendly to the application; we do need to imply something.
For example, why don't we expose queue_release at the ether layer?
Why does queue_setup imply a queue_release on a queue that is already set up?
Shouldn't it return failure to warn the user that a queue can't be reconfigured without being released first?
I think it's the same pattern for why we have queue_stop / queue_start here.
If the application wants to set up a queue on a running device, of course it wants the queue started immediately (if not, it can use deferred_start).
If the application wants to re-setup a queue on a running device, of course it wants the queue stopped first.
Why do we set unnecessary barriers here?
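A minimal sketch of the flow the patch has in mind (hypothetical application code under the proposed semantics; the port is assumed to be configured and started with a free queue slot q, mp is an existing mempool):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
add_rxq_on_running_port(uint16_t port, uint16_t q, struct rte_mempool *mp)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rxconf rxconf;

	rte_eth_dev_info_get(port, &info);
	rxconf = info.default_rxconf;

	/* rx_deferred_start == 0: under the proposed semantics the queue is
	 * started automatically, because the device is already running.
	 * rx_deferred_start == 1: the queue stays stopped until the
	 * application calls rte_eth_dev_rx_queue_start() itself. */
	rxconf.rx_deferred_start = 0;

	return rte_eth_rx_queue_setup(port, q, 1024, rte_socket_id(), &rxconf, mp);
}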
> No need to introduce such no-sense as 'deferred setup' or implicit stop in
> start.
> That just would add more mess and confusion.
>
> > > Which means
> > > Queue_setup + dev_start = dev_start + queue_setup
> > > Queue_setup(deferred) + dev_start + queue_start = dev_start +
> > > queue_setup(deferred) + queue_start.
> > > Queue_setup + dev_start + queue_setup(same queue) = dev_start +
> > > queue_setup + queue_setup(same queue)
> > >
> >
> > One mistake for the third item, It should be Queue_setup +
> > Queue_setup(same queue) + dev_start = queue_setup + dev_start +
> > queue_setup(same queue)
> >
> > > But not
> > > Queue_setup + dev_start = dev_start+ queue_setup + queue_start
> > > Queue_setup(deffered) + dev_start +qeueu_start = dev_start+
> > > queue_setup (ignore deferred)+ queue_start Queue_setup + dev_start +
> > > queue_setup(same queue) = dev_start + queue_setup + queue_stop +
> > > queue_setup + queue_start.
> >
> > Third item should be
> > Queue_setup + Queue_setup(same queue) + dev_start = queue_setup +
> > dev_start + queue_stop + queue_setup(same queue) + queue_start
> > >
> > > I think option 1 have the pattern and easy to understand
>
> I don't think so.
> From my perspective it just introduce more confusion to the user.
I can't agree with this; actually it's quite simple to use the APIs.
The user just needs to remember that it is now free to re-order queue_setup and dev_start; both call sequences reach the same destination.
And if the user does want to control queue start at a specific time, just use the deferred_start flag and call queue_start explicitly as usual; nothing changes.
Actually I agree with what Bruce said:
"keeping existing behavior unless there is a compelling reason to change"
The patch does try to keep consistent behavior from the user's point of view.
Regards
Qi
>
> > and option2 just add unnecessary queue_start/queue_stop
>
> Why unnecessary - if the user wants to start the queue - he/she calls
> queue_start(), It is obvious, isn't it?
>
> > and make deferred_start redundant at some situation.
>
> Deferred start is used only by dev_start, that's what it was intended for.
> Let it stay that way.
> BTW, we can get rid of it and add to dev_start() as a parameter a list of
> queues to start (not to start) - would be great.
> But that's the matter of different discussion, I think.
>
> Konstantin
>
> > > >
> > > > > .
> > > > > > And I don't see an good reason to break existing behavior.
> > > I don't think it break any exist behavior, again deferred setup does
> > > not imply deferred start, because dev_start imply queue_start, and we
> follow this logic.
> > >
> > > > > > What is the advantage of implicit call queue_start()
> > > > > > implicitly from the queue_setup()/?
> > > > > > Konstantin
> > > > > >
> > > > > > >
> > > > > > > > Might be user doesn't want to start queue, might be he
> > > > > > > > only wants to start it.
> > > > > > > Use deferred_start_flag,
> > > > > > > > Might be he would need to call queue_setup() once again
> > > > > > > > later before starting it - based on some logic?
> > > > > > > Dev_ops->queue_stop will be called first before
> > > > > > > dev_ops->queue_setup in
> > > > > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > If the user wants to setup and start the queue immediately
> > > > > > > > he can always
> > > > > > do:
> > > > > > > >
> > > > > > > > rc = queue_setup(...);
> > > > > > > > if (rc == 0)
> > > > > > > > queue_start(...);
> > > > > > >
> > > > > > > application no need to call queue_start explicitly in this case.
> > > > > > >
> > > > > > > >
> > > > > > > > We have a pretty well defined API here let's keep it like that.
> > > > > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-18 7:55 ` Zhang, Qi Z
@ 2018-03-20 13:18 ` Ananyev, Konstantin
2018-03-21 1:53 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-20 13:18 UTC (permalink / raw)
To: Zhang, Qi Z; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> > > > > > > > > > > >
> > > > > > > > > > > > Expose the deferred queue configuration capability
> > > > > > > > > > > > and enhance i40e_dev_[rx|tx]_queue_[setup|release]
> > > > > > > > > > > > to handle the situation when device already started.
> > > > > > > > > > > >
> > > > > > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > > > > > ---
> > > > > > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > > > > > 2 files changed, 66 insertions(+), 2 deletions(-)
> > > > > > > > > > > >
> > > > > > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > > > > > 100644
> > > > > > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > > > > > rte_eth_dev
> > > > > > > > > *dev,
> > > > > > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > > > > > +
> > > > > > > > > > > > dev_info->hash_key_size =
> > > > > (I40E_PFQF_HKEY_MAX_INDEX +
> > > > > > > 1) *
> > > > > > > > > > > > sizeof(uint32_t);
> > > > > > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff
> > > > > > > > > > > > --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > @@ -1712,6 +1712,7 @@
> > i40e_dev_rx_queue_setup(struct
> > > > > > > > > rte_eth_dev
> > > > > > > > > > > *dev,
> > > > > > > > > > > > uint16_t len, i;
> > > > > > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > > > > > + int ret = 0;
> > > > > > > > > > > >
> > > > > > > > > > > > if (hw->mac.type == I40E_MAC_VF || hw->mac.type
> > ==
> > > > > > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > > > > > vf =
> > > > > > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > > > > > @@ -1841,6 +1842,25 @@
> > > > > > > > > > > > i40e_dev_rx_queue_setup(struct
> > > > > > > > > rte_eth_dev
> > > > > > > > > > > *dev,
> > > > > > > > > > > > rxq->dcb_tc = i;
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > > > > > + "Failed to do RX queue
> > initialization");
> > > > > > > > > > > > + return ret;
> > > > > > > > > > > > + }
> > > > > > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > > > > > >
> > > > > > > > > > > Better to check what rx function is installed right now.
> > > > > > > > > > Yes, it should be fixed, need to return fail if any
> > > > > > > > > > conflict
> > > > > > > > > > >
> > > > > > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > > > > > + ret = i40e_dev_rx_queue_start(dev,
> > > > queue_idx);
> > > > > > > > > > >
> > > > > > > > > > > I don't think it is a good idea to start/stop queue
> > > > > > > > > > > inside queue_setup/queue_release.
> > > > > > > > > > > There is special API (queue_start/queue_stop) to do this.
> > > > > > > > > >
> > > > > > > > > > The idea is if dev already started, the queue is
> > > > > > > > > > supposed to be started
> > > > > > > > > automatically after queue_setup.
> > > > > > > > >
> > > > > > > > > Why is that?
> > > > > > > > Because device is already started, its like a running
> > > > > > > > conveyor belt, anything
> > > > > > > you put or replace on it just moves automatically.
> > > > > > >
> > > > > > > Why is that? :)
> > > > > > > You do break existing behavior.
> > > > > > > Right now it possible to do:
> > > > > > > queue_setup(); queue_setup();
> > > > > > > for the same queue.
> > > > > > > With you patch is not any more
> > > > > > Why not?
> > > > > > I think with my patch,
> > > > > > It assumes we can run below scenario on the same queue.
> > > > > > (note, I assume queue_stop/start has been moved from i40e to
> > > > > > ethedev layer already.) queue_setup + queue_setup + dev_start +
> > > > > > queue_setup
> > > > > > + queue_setup,
> > > > >
> > > > > Because you can't do queue_setup() on already started queue.
> > > > > So if you do start() inside setup() second setup() should fail.
> > > > NO, because in queue_release, it will call queue_stop And as I said
> > > > before, it's better to move to queue_stop in ether layer, it's not an issue.
> > > > >
> > > > > > queue_stop/start are handled inside queue_setup automatically
> > > > > > after
> > > > > dev_started?
> > > > >
> > > > > Again - I don't see any advantages to change existing API behavior
> > > > > and introduce implicit start/stop inside setup.
> > > > > It only introduce extra confusion for the users.
> > > > > So I still think we better keep existing behavior.
> > > > > Konstantin
> > > >
> > > > OK, let me try again :)
> > > > I think the patch try to keep deferred setup independent of deferred
> > > > start Deferred setup does not necessary to imply a deferred start.
> >
> > I don't understand what means 'deferred setup'.
> > We do have deferred_start for queue config, but it only used by dev_start().
>
> > Please, stop imply anything.
> > We have an API which is quite straightforward and does exactly what it
> > states.
> >
> > - queue_setup() - if queue is not started, then setup the queue.
> > - queue_start() - if queue is not started, then start the queue.
> > - queue_stop() - if queue is started, then stop the queue.
> > - dev_start() - in terms of queue behavior
> > for all configured queues; do
> > if queue->deferred_start != 0; then queue_start(queue);
> > done
> >
> > Let's keep it like that - nice and simple.
> Yes, let's keep it nice and simple at dev_ops layer,.
> But etherdev layer should be more friendly to application, we need imply something.
>
> For example, why we don't expose queue_release to ether layer,
> Why queue_setup imply a queue_release on a queue already be setup?
> Shouldn't it return fail to warn user, that a queue can't be reconfigure without release if first?
If you think queue_release() should be a public API - submit an RFC for that, then we can discuss it.
>
> I thinks it's the same pattern for why we have queue_stop / queue_start here.
Not really, from my perspective.
setup/release - to set up / tear down the internal queue structures.
start/stop - to start/stop RX/TX on those queues.
> if application want to setup a queue on a running device, of cause it want queue be started immediately
Some apps might, some might not.
Those who want to start the queue will call queue_start() - simple and straightforward.
> (if not? It can use deferred_start)
rte_eth_rxconf.deferred_start right now is used for one particular purpose:
uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
Now you are trying to overload it with extra meaning:
do not start the queue with rte_eth_dev_start(), and
if the device is already started, don't start the queue from queue_setup().
That looks very confusing to me; plus, what is probably worse, there is now no consistent behavior
between queue_setup() invoked before dev_start() and queue_setup() invoked after dev_start().
I would expect queue_setup() in both cases to preserve the current behavior, or at least be as close
to it as possible.
Current queue_setup behaves like that:
queue_setup(queue)
{
if (device is started)
return with error;
if (queue is already setup)
queue_release(queue);
do_queue_setup(queue);
}
Preserving current behavior and introducing ability to setup queue for
already started device:
queue_setup(queue)
{
if (queue is not stopped)
return with error;
if (queue is already setup)
queue_release(queue);
do_queue_setup(queue);
}
What is proposed in your patch:
queue_setup(queue)
{
if (queue is already setup) {
/* via release */
if (if device is started AND queue is not stopped)
queue_stop(queue);
queue_release(queue);
}
do_queue_setup(queue);
if (device is started AND deferred_start for the queue is off)
queue_start(queue);
}
That looks quite different from the current queue_setup() behavior, plus
you introduce extra meaning for rte_eth_rxconf.deferred_start.
All of that in a way that is not obvious to the user.
I still don't see any good reason to change the existing queue_setup()
behavior in such a significant way.
So my vote for the proposed new behavior is NACK.
If you really strongly feel that the current queue_setup() functionality has to be overloaded
(what you propose is really queue_stop_setup_start) - then I think it should first be stated clearly
in an RFC and discussed with the community.
Same for overloading the deferred_start field.
> if application want to re_setup a queue on a running device, of cause it want queue can be stopped first.
> Why we set unnecessary barriers here?
>
> > No need to introduce such no-sense as 'deferred setup' or implicit stop in
> > start.
> > That just would add more mess and confusion.
> >
> > > > Which means
> > > > Queue_setup + dev_start = dev_start + queue_setup
> > > > Queue_setup(deferred) + dev_start + queue_start = dev_start +
> > > > queue_setup(deferred) + queue_start.
> > > > Queue_setup + dev_start + queue_setup(same queue) = dev_start +
> > > > queue_setup + queue_setup(same queue)
> > > >
> > >
> > > One mistake for the third item, It should be Queue_setup +
> > > Queue_setup(same queue) + dev_start = queue_setup + dev_start +
> > > queue_setup(same queue)
> > >
> > > > But not
> > > > Queue_setup + dev_start = dev_start+ queue_setup + queue_start
> > > > Queue_setup(deffered) + dev_start +qeueu_start = dev_start+
> > > > queue_setup (ignore deferred)+ queue_start Queue_setup + dev_start +
> > > > queue_setup(same queue) = dev_start + queue_setup + queue_stop +
> > > > queue_setup + queue_start.
> > >
> > > Third item should be
> > > Queue_setup + Queue_setup(same queue) + dev_start = queue_setup +
> > > dev_start + queue_stop + queue_setup(same queue) + queue_start
> > > >
> > > > I think option 1 have the pattern and easy to understand
> >
> > I don't think so.
> > From my perspective it just introduce more confusion to the user.
>
> I can't agree this, actually it's quite simple to use the APIs.
> User just need to remember, now, it's free to re-order queue_setup and dev_start, both call sequence reach the same destination.
> And if user does want to control queue start at specific time, just use deferred_start_flag and call queue_start explicitly as unusually,
> nothing changes
> Actually I agree with what Bruce said:
> "keeping existing behavior unless there is a compelling reason to change"
> The patch does try to keep consistent behavior from user's view.
It doesn't - that's the problem.
Konstantin
>
> Regards
> Qi
> >
> > > and option2 just add unnecessary queue_start/queue_stop
> >
> > Why unnecessary - if the user wants to start the queue - he/she calls
> > queue_start(), It is obvious, isn't it?
> >
> > > and make deferred_start redundant at some situation.
> >
> > Deferred start is used only by dev_start, that's what it was intended for.
> > Let it stay that way.
> > BTW, we can get rid of it and add to dev_start() as a parameter a list of
> > queues to start (not to start) - would be great.
> > But that's the matter of different discussion, I think.
> >
> > Konstantin
> >
> > > > >
> > > > > > .
> > > > > > > And I don't see an good reason to break existing behavior.
> > > > I don't think it break any exist behavior, again deferred setup does
> > > > not imply deferred start, because dev_start imply queue_start, and we
> > follow this logic.
> > > >
> > > > > > > What is the advantage of implicit call queue_start()
> > > > > > > implicitly from the queue_setup()/?
> > > > > > > Konstantin
> > > > > > >
> > > > > > > >
> > > > > > > > > Might be user doesn't want to start queue, might be he
> > > > > > > > > only wants to start it.
> > > > > > > > Use deferred_start_flag,
> > > > > > > > > Might be he would need to call queue_setup() once again
> > > > > > > > > later before starting it - based on some logic?
> > > > > > > > Dev_ops->queue_stop will be called first before
> > > > > > > > dev_ops->queue_setup in
> > > > > > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > If the user wants to setup and start the queue immediately
> > > > > > > > > he can always
> > > > > > > do:
> > > > > > > > >
> > > > > > > > > rc = queue_setup(...);
> > > > > > > > > if (rc == 0)
> > > > > > > > > queue_start(...);
> > > > > > > >
> > > > > > > > application no need to call queue_start explicitly in this case.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > We have a pretty well defined API here let's keep it like that.
> > > > > > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue setup
2018-03-20 13:18 ` Ananyev, Konstantin
@ 2018-03-21 1:53 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-21 1:53 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, March 20, 2018 9:19 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred queue
> setup
>
>
>
> > > > > > > > > > > > >
> > > > > > > > > > > > > Expose the deferred queue configuration
> > > > > > > > > > > > > capability and enhance
> > > > > > > > > > > > > i40e_dev_[rx|tx]_queue_[setup|release]
> > > > > > > > > > > > > to handle the situation when device already started.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > > > > > > > > > > > ---
> > > > > > > > > > > > > drivers/net/i40e/i40e_ethdev.c | 6 ++++
> > > > > > > > > > > > > drivers/net/i40e/i40e_rxtx.c | 62
> > > > > > > > > > > > ++++++++++++++++++++++++++++++++++++++++--
> > > > > > > > > > > > > 2 files changed, 66 insertions(+), 2
> > > > > > > > > > > > > deletions(-)
> > > > > > > > > > > > >
> > > > > > > > > > > > > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > > b/drivers/net/i40e/i40e_ethdev.c index
> > > > > > > > > > > > > 06b0f03a1..843a0c42a
> > > > > > > > > > > > > 100644
> > > > > > > > > > > > > --- a/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > > +++ b/drivers/net/i40e/i40e_ethdev.c
> > > > > > > > > > > > > @@ -3195,6 +3195,12 @@ i40e_dev_info_get(struct
> > > > > > > > > > > > > rte_eth_dev
> > > > > > > > > > *dev,
> > > > > > > > > > > > struct rte_eth_dev_info *dev_info)
> > > > > > > > > > > > > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > > > > > > > > > > > > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > > > > > > > > > > > > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > > > > > > > > > > > > + dev_info->deferred_queue_config_capa =
> > > > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_SETUP |
> > > > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_SETUP |
> > > > > > > > > > > > > + DEV_DEFERRED_RX_QUEUE_RELEASE |
> > > > > > > > > > > > > + DEV_DEFERRED_TX_QUEUE_RELEASE;
> > > > > > > > > > > > > +
> > > > > > > > > > > > > dev_info->hash_key_size =
> > > > > > (I40E_PFQF_HKEY_MAX_INDEX +
> > > > > > > > 1) *
> > > > > > > > > > > > > sizeof(uint32_t);
> > > > > > > > > > > > > dev_info->reta_size = pf->hash_lut_size; diff
> > > > > > > > > > > > > --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > > > > > > > > > 1217e5a61..e5f532cf7 100644
> > > > > > > > > > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > > > > > > > > > @@ -1712,6 +1712,7 @@
> > > i40e_dev_rx_queue_setup(struct
> > > > > > > > > > rte_eth_dev
> > > > > > > > > > > > *dev,
> > > > > > > > > > > > > uint16_t len, i;
> > > > > > > > > > > > > uint16_t reg_idx, base, bsf, tc_mapping;
> > > > > > > > > > > > > int q_offset, use_def_burst_func = 1;
> > > > > > > > > > > > > + int ret = 0;
> > > > > > > > > > > > >
> > > > > > > > > > > > > if (hw->mac.type == I40E_MAC_VF ||
> > > > > > > > > > > > > hw->mac.type
> > > ==
> > > > > > > > > > > > I40E_MAC_X722_VF) {
> > > > > > > > > > > > > vf =
> > > > > > > > I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > > > > > > > > > > > > @@ -1841,6 +1842,25 @@
> > > > > > > > > > > > > i40e_dev_rx_queue_setup(struct
> > > > > > > > > > rte_eth_dev
> > > > > > > > > > > > *dev,
> > > > > > > > > > > > > rxq->dcb_tc = i;
> > > > > > > > > > > > > }
> > > > > > > > > > > > >
> > > > > > > > > > > > > + if (dev->data->dev_started) {
> > > > > > > > > > > > > + ret = i40e_rx_queue_init(rxq);
> > > > > > > > > > > > > + if (ret != I40E_SUCCESS) {
> > > > > > > > > > > > > + PMD_DRV_LOG(ERR,
> > > > > > > > > > > > > + "Failed to do RX queue
> > > initialization");
> > > > > > > > > > > > > + return ret;
> > > > > > > > > > > > > + }
> > > > > > > > > > > > > + if (ad->rx_vec_allowed)
> > > > > > > > > > > >
> > > > > > > > > > > > Better to check what rx function is installed right now.
> > > > > > > > > > > Yes, it should be fixed, need to return fail if any
> > > > > > > > > > > conflict
> > > > > > > > > > > >
> > > > > > > > > > > > > + i40e_rxq_vec_setup(rxq);
> > > > > > > > > > > > > + if (!rxq->rx_deferred_start) {
> > > > > > > > > > > > > + ret = i40e_dev_rx_queue_start(dev,
> > > > > queue_idx);
> > > > > > > > > > > >
> > > > > > > > > > > > I don't think it is a good idea to start/stop
> > > > > > > > > > > > queue inside queue_setup/queue_release.
> > > > > > > > > > > > There is special API (queue_start/queue_stop) to do
> this.
> > > > > > > > > > >
> > > > > > > > > > > The idea is if dev already started, the queue is
> > > > > > > > > > > supposed to be started
> > > > > > > > > > automatically after queue_setup.
> > > > > > > > > >
> > > > > > > > > > Why is that?
> > > > > > > > > Because device is already started, its like a running
> > > > > > > > > conveyor belt, anything
> > > > > > > > you put or replace on it just moves automatically.
> > > > > > > >
> > > > > > > > Why is that? :)
> > > > > > > > You do break existing behavior.
> > > > > > > > Right now it possible to do:
> > > > > > > > queue_setup(); queue_setup(); for the same queue.
> > > > > > > > With you patch is not any more
> > > > > > > Why not?
> > > > > > > I think with my patch,
> > > > > > > It assumes we can run below scenario on the same queue.
> > > > > > > (note, I assume queue_stop/start has been moved from i40e to
> > > > > > > ethedev layer already.) queue_setup + queue_setup +
> > > > > > > dev_start + queue_setup
> > > > > > > + queue_setup,
> > > > > >
> > > > > > Because you can't do queue_setup() on already started queue.
> > > > > > So if you do start() inside setup() second setup() should fail.
> > > > > NO, because in queue_release, it will call queue_stop And as I
> > > > > said before, it's better to move to queue_stop in ether layer, it's not
> an issue.
> > > > > >
> > > > > > > queue_stop/start are handled inside queue_setup
> > > > > > > automatically after
> > > > > > dev_started?
> > > > > >
> > > > > > Again - I don't see any advantages to change existing API
> > > > > > behavior and introduce implicit start/stop inside setup.
> > > > > > It only introduce extra confusion for the users.
> > > > > > So I still think we better keep existing behavior.
> > > > > > Konstantin
> > > > >
> > > > > OK, let me try again :)
> > > > > I think the patch try to keep deferred setup independent of
> > > > > deferred start Deferred setup does not necessary to imply a deferred
> start.
> > >
> > > I don't understand what means 'deferred setup'.
> > > We do have deferred_start for queue config, but it only used by
> dev_start().
> >
> > > Please, stop imply anything.
> > > We have an API which is quite straightforward and does exactly what
> > > it states.
> > >
> > > - queue_setup() - if queue is not started, then setup the queue.
> > > - queue_start() - if queue is not started, then start the queue.
> > > - queue_stop() - if queue is started, then stop the queue.
> > > - dev_start() - in terms of queue behavior
> > > for all configured queues; do
> > > if queue->deferred_start != 0; then queue_start(queue);
> > > done
> > >
> > > Let's keep it like that - nice and simple.
> > Yes, let's keep it nice and simple at dev_ops layer,.
> > But etherdev layer should be more friendly to application, we need imply
> something.
> >
> > For example, why we don't expose queue_release to ether layer, Why
> > queue_setup imply a queue_release on a queue already be setup?
> > Shouldn't it return fail to warn user, that a queue can't be reconfigure
> without release if first?
>
> If you think queue_release() should be a public API - submit and RFC for that,
> then we can discuss it.
>
> >
> > I thinks it's the same pattern for why we have queue_stop / queue_start
> here.
>
> Not really from my perspective.
> setup/release - to setup/teardown internal queue structures.
> start/stop - to start/stop RX/TX on that queues.
>
> > if application want to setup a queue on a running device, of cause it
> > want queue be started immediately
>
> Some apps might, some might not.
> Those who want to start the queue will call queue_start() - simple and
> straightforward.
>
> > (if not? It can use deferred_start)
>
> rte_eth_rxconf.deferred_start right now is used by one particular purpose:
> uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start().
> */
>
> Now you are trying to overload it with extra meaning:
Yes, based on the existing comment, deferred_start is overloaded.
> Do not start queue with rte_eth_dev_start() if device is already started don't
> start the queue from the queue_setup().
This is correct, but it could also be explained in a simpler way:
deferred_start=0: the queue will be started automatically when the device is started.
deferred_start=1: the queue can only be started manually by queue_start.
Maybe "no_auto_start" would be a better name.
>
> Looks very confusing to me, plus what is probably worse there is now no
> consistent behavior between queue_setup() invoked before dev_start() and
> queue_setup() invoked after dev_start.
> I would expect queue_setup() in both cases to preserve current behavior or
> at least be as close as possible to it.
>
> Current queue_setup behaves like that:
>
> queue_setup(queue)
> {
> if (device is started)
> return with error;
> if (queue is already setup)
> queue_release(queue);
>
> do_queue_setup(queue);
> }
>
> Preserving current behavior and introducing ability to setup queue for
> already started device:
>
> queue_setup(queue)
> {
> if (queue is not stopped)
> return with error;
> if (queue is already setup)
> queue_release(queue);
>
> do_queue_setup(queue);
> }
>
> What is proposed in your patch:
>
> queue_setup(queue)
> {
> if (queue is already setup) {
> /* via release */
> if (if device is started AND queue is not stopped)
> queue_stop(queue);
>
> queue_release(queue);
> }
>
> do_queue_setup(queue);
>
> if (device is started AND deferred_start for the queue is off)
> queue_start(queue);
> }
>
> That looks quite different from the current queue_setup() behavior, plus you
> introduce extra meaning for rte_eth_rxconf.deferred_start.
> All of that in a way that is not obvious to the user.
>
> I still don't see any good reason to change existing queue_setup() behavior in
> a such significant way.
> So my vote for the proposed new behavior is NACK.
>
> If you really strongly feel that the current queue_setup() functionality has to be
> overloaded (what you propose is really queue_stop_setup_start) - then I
> think it should first be stated clearly within an RFC and discussed with the
> community.
> Same for overloading the deferred_start field.
OK, I will consider this in a separate RFC patch. I don't think involving auto start/stop in the queue_setup context brings any trouble.
To me it simplifies the application's code: just as we don't need an additional queue_start call after queue_setup / dev_start,
a queue could be configured to start automatically at queue_setup.
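For illustration, a minimal sketch of the two call orders being compared here (the
auto-start variant reflects the behavior proposed in this patch, not the current
ethdev semantics; port_id, nb_desc and mb_pool are placeholders):

/* proposed: setup on a started port implicitly starts the queue
 * unless deferred_start is set */
rte_eth_dev_start(port_id);
rte_eth_rx_queue_setup(port_id, 1, nb_desc, rte_socket_id(), NULL, mb_pool);
/* queue 1 would already be receiving here */

/* explicit model preferred in the review: setup, then start */
rte_eth_dev_start(port_id);
rte_eth_rx_queue_setup(port_id, 1, nb_desc, rte_socket_id(), NULL, mb_pool);
rte_eth_dev_rx_queue_start(port_id, 1);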
Regards
Qi
>
> > if the application wants to re-setup a queue on a running device, of course it
> > wants the queue to be stopped first.
> > Why do we set unnecessary barriers here?
> >
> > > No need to introduce such nonsense as 'deferred setup' or an implicit
> > > stop in start.
> > > That would just add more mess and confusion.
> > >
> > > > > Which means
> > > > > Queue_setup + dev_start = dev_start + queue_setup
> > > > > Queue_setup(deferred) + dev_start + queue_start = dev_start +
> > > > > queue_setup(deferred) + queue_start.
> > > > > Queue_setup + dev_start + queue_setup(same queue) = dev_start +
> > > > > queue_setup + queue_setup(same queue)
> > > > >
> > > >
> > > > One mistake in the third item; it should be Queue_setup +
> > > > Queue_setup(same queue) + dev_start = queue_setup + dev_start +
> > > > queue_setup(same queue)
> > > >
> > > > > But not
> > > > > Queue_setup + dev_start = dev_start+ queue_setup + queue_start
> > > > > Queue_setup(deffered) + dev_start +qeueu_start = dev_start+
> > > > > queue_setup (ignore deferred)+ queue_start Queue_setup +
> > > > > dev_start + queue_setup(same queue) = dev_start + queue_setup +
> > > > > queue_stop + queue_setup + queue_start.
> > > >
> > > > Third item should be
> > > > Queue_setup + Queue_setup(same queue) + dev_start = queue_setup
> +
> > > > dev_start + queue_stop + queue_setup(same queue) + queue_start
> > > > >
> > > > > I think option 1 has a clear pattern and is easy to understand
> > >
> > > I don't think so.
> > > From my perspective it just introduce more confusion to the user.
> >
> > I can't agree with this; actually it's quite simple to use the APIs.
> > The user just needs to remember that it is now free to re-order queue_setup and
> > dev_start; both call sequences reach the same destination.
> > And if the user does want to control queue start at a specific time, just
> > use the deferred_start flag and call queue_start explicitly as usual;
> > nothing changes. Actually I agree with what Bruce said:
> > "keeping existing behavior unless there is a compelling reason to change"
> > The patch does try to keep consistent behavior from the user's view.
>
> It doesn't - that's the problem.
> Konstantin
>
> >
> > Regards
> > Qi
> > >
> > > > and option 2 just adds unnecessary queue_start/queue_stop
> > >
> > > Why unnecessary - if the user wants to start the queue - he/she
> > > calls queue_start(). It is obvious, isn't it?
> > >
> > > > and makes deferred_start redundant in some situations.
> > >
> > > Deferred start is used only by dev_start; that's what it was intended for.
> > > Let it stay that way.
> > > BTW, if we could get rid of it and add to dev_start() as a parameter a
> > > list of queues to start (or not to start) - that would be great.
> > > But that's a matter for a different discussion, I think.
> > >
> > > Konstantin
> > >
> > > > > >
> > > > > > > .
> > > > > > > > And I don't see a good reason to break existing behavior.
> > > > > I don't think it breaks any existing behavior; again, deferred setup
> > > > > does not imply deferred start, because dev_start implies
> > > > > queue_start, and we follow this logic.
> > > > >
> > > > > > > > What is the advantage of calling queue_start()
> > > > > > > > implicitly from queue_setup()?
> > > > > > > > Konstantin
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > Might be user doesn't want to start queue, might be he
> > > > > > > > > > only wants to start it.
> > > > > > > > > Use deferred_start_flag,
> > > > > > > > > > Might be he would need to call queue_setup() once
> > > > > > > > > > again later before starting it - based on some logic?
> > > > > > > > > Dev_ops->queue_stop will be called first before
> > > > > > > > > dev_ops->queue_setup in
> > > > > > > > rte_eth_rx|tx_queue_setup, if a queue is running.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > If the user wants to setup and start the queue
> > > > > > > > > > immediately he can always
> > > > > > > > do:
> > > > > > > > > >
> > > > > > > > > > rc = queue_setup(...); if (rc == 0)
> > > > > > > > > > queue_start(...);
> > > > > > > > >
> > > > > > > > > the application does not need to call queue_start explicitly in this case.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > We have a pretty well defined API here let's keep it like that.
> > > > > > > > > > Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (4 preceding siblings ...)
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
@ 2018-03-21 7:28 ` Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 1/3] ether: support " Qi Zhang
` (2 more replies)
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
` (4 subsequent siblings)
10 siblings, 3 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-21 7:28 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This can't satisfy the use case where the application wants to defer setup of
part of the queues while keeping traffic running on the queues that are
already set up.
example:
rte_eth_dev_configure(nb_rxq = 2, nb_txq = 2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx = 1 ...) /* fail */
Basically this is not a general hardware limitation, because for NICs
like i40e and ixgbe, it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has no
traffic on it.
The patch lets the etherdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
then, based on this flag, rte_eth_[rx|tx]_queue_setup can decide whether
to continue setting up the queue or just fail when the device is already
started.
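For reference, a minimal sketch of the intended application-side usage, based on
the capability flag and field names introduced in this series (port_id, nb_desc
and mb_pool are placeholders for values the application already owns; the port is
assumed to be configured with nb_rxq >= 2 and already started):

/* add rx queue 1 while the port keeps forwarding on the queues set up earlier;
 * error handling omitted for brevity */
struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(port_id, &dev_info);

if (dev_info.runtime_queue_setup_capa & DEV_RUNTIME_RX_QUEUE_SETUP) {
	if (rte_eth_rx_queue_setup(port_id, 1, nb_desc, rte_socket_id(),
				   NULL, mb_pool) == 0)
		rte_eth_dev_rx_queue_start(port_id, 1);
}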
v3:
- do not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to the testpmd queue setup command line
- i40e fix: return failure when the required rx/tx function conflicts with
the existing setup.
v2:
- enhance comment in rte_ethdev.h
Qi Zhang (3):
ether: support runtime queue setup
app/testpmd: add command for queue setup
net/i40e: enable runtime queue setup
app/test-pmd/cmdline.c | 128 ++++++++++++++++++++++++++++
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 64 ++++++++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++---
lib/librte_ether/rte_ethdev.h | 7 ++
7 files changed, 236 insertions(+), 12 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] ether: support runtime queue setup
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
@ 2018-03-21 7:28 ` Qi Zhang
2018-03-25 19:47 ` Ananyev, Konstantin
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 2/3] app/testpmd: add command for " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-21 7:28 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
The patch lets the etherdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
then, based on the flag, rte_eth_[rx|tx]_queue_setup can decide whether
to continue setting up the queue or just fail when the device is already
started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v3:
- not overload deferred start
- rename deferred setup to runtime setup
v2:
- enhance comment
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..6983faa4e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_runtime_setup_capabilities:
+Queue runtime setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0590f0c10..343b1a6c0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 036153306..4e2088458 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Deferred setup rx queue */
+#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Deferred setup tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t runtime_queue_setup_capa;
+ /**< queues can be setup after dev_start (DEV_DEFERRED_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] app/testpmd: add command for queue setup
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 1/3] ether: support " Qi Zhang
@ 2018-03-21 7:28 ` Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-21 7:28 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
rte_eth_[rx|tx]_queue_setup will be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v3:
- add offload parameter to the queue setup command.
- a couple of code refactors.
app/test-pmd/cmdline.c | 128 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 135 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d1dc1de6c..02dcad2b3 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)\n"
+ " setup a not started queue or re-setup a started queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,130 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+ uint64_t offloads;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_offloads =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads, UINT64);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ struct rte_eth_rxconf rxconf = port->rx_conf;
+
+ rxconf.offloads = res->offloads;
+ socket_id = rxring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &rxconf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ struct rte_eth_txconf txconf = port->tx_conf;
+
+ txconf.offloads = res->offloads;
+ socket_id = txring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &txconf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16399,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..e8e0b0a4e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Setup a not started queue or re-setup a started queue::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime queue setup
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 1/3] ether: support " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-03-21 7:28 ` Qi Zhang
2018-03-25 19:46 ` Ananyev, Konstantin
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-21 7:28 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v3:
- no queue start/stop in setup/release
- return failure when the required rx/tx function conflicts with
the existing setup
drivers/net/i40e/i40e_ethdev.c | 4 +++
drivers/net/i40e/i40e_rxtx.c | 64 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 68 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 508b4171c..68960dcaa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3197,6 +3197,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->runtime_queue_setup_capa =
+ DEV_RUNTIME_RX_QUEUE_SETUP |
+ DEV_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..9eb009d63 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t len, i;
uint16_t reg_idx, base, bsf, tc_mapping;
int q_offset, use_def_burst_func = 1;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -1841,6 +1842,36 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_rx_queue_init(rxq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return ret;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed) {
+ if (i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup");
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ }
+ /* check scatterred conflict */
+ if (!dev->data->scattered_rx) {
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+
+ if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) >
+ buf_size) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required");
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ }
+ }
+
return 0;
}
@@ -1980,6 +2011,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct i40e_vsi *vsi;
struct i40e_pf *pf = NULL;
struct i40e_vf *vf = NULL;
@@ -1989,6 +2022,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t tx_rs_thresh, tx_free_thresh;
uint16_t reg_idx, i, base, bsf, tc_mapping;
int q_offset;
+ int ret = 0;
if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -2162,6 +2196,36 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ ret = i40e_tx_queue_init(txq);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return ret;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup");
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ }
+
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
+ }
+ PMD_DRV_LOG(ERR, "No-simple tx is required");
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ }
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime queue setup
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime " Qi Zhang
@ 2018-03-25 19:46 ` Ananyev, Konstantin
2018-03-26 8:49 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-25 19:46 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
Hi Qi,
>
> Expose the runtime queue configuration capability and enhance
> i40e_dev_[rx|tx]_queue_setup to handle the situation when
> device already started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> v3:
> - no queue start/stop in setup/release
> - return fail when required rx/tx function conflict with
> exist setup
>
> drivers/net/i40e/i40e_ethdev.c | 4 +++
> drivers/net/i40e/i40e_rxtx.c | 64 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 68 insertions(+)
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 508b4171c..68960dcaa 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -3197,6 +3197,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> DEV_TX_OFFLOAD_GRE_TNL_TSO |
> DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> + dev_info->runtime_queue_setup_capa =
> + DEV_RUNTIME_RX_QUEUE_SETUP |
> + DEV_RUNTIME_TX_QUEUE_SETUP;
> +
> dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> sizeof(uint32_t);
> dev_info->reta_size = pf->hash_lut_size;
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 1217e5a61..9eb009d63 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t len, i;
> uint16_t reg_idx, base, bsf, tc_mapping;
> int q_offset, use_def_burst_func = 1;
> + int ret = 0;
>
> if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
> vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> @@ -1841,6 +1842,36 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> rxq->dcb_tc = i;
> }
>
> + if (dev->data->dev_started) {
> + ret = i40e_rx_queue_init(rxq);
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to do RX queue initialization");
> + return ret;
> + }
We probably also have to do here:
if (use_def_burst_func != 0 && ad->rx_bulk_alloc_allowed) {error;}
and we have to do that before we assign ad->rx_bulk_alloc_allowed
(inside rx_queue_setup() a few lines above).
> + /* check vector conflict */
> + if (ad->rx_vec_allowed) {
> + if (i40e_rxq_vec_setup(rxq)) {
> + PMD_DRV_LOG(ERR, "Failed vector rx setup");
> + i40e_dev_rx_queue_release(rxq);
> + return -EINVAL;
> + }
> + }
> + /* check scatterred conflict */
> + if (!dev->data->scattered_rx) {
> + uint16_t buf_size =
> + (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> + RTE_PKTMBUF_HEADROOM);
> +
> + if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) >
> + buf_size) {
> + PMD_DRV_LOG(ERR, "Scattered rx is required");
> + i40e_dev_rx_queue_release(rxq);
> + return -EINVAL;
> + }
> + }
> + }
> +
> return 0;
> }
>
> @@ -1980,6 +2011,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> const struct rte_eth_txconf *tx_conf)
> {
> struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct i40e_adapter *ad =
> + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> struct i40e_vsi *vsi;
> struct i40e_pf *pf = NULL;
> struct i40e_vf *vf = NULL;
> @@ -1989,6 +2022,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> uint16_t tx_rs_thresh, tx_free_thresh;
> uint16_t reg_idx, i, base, bsf, tc_mapping;
> int q_offset;
> + int ret = 0;
>
> if (hw->mac.type == I40E_MAC_VF || hw->mac.type == I40E_MAC_X722_VF) {
> vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> @@ -2162,6 +2196,36 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> txq->dcb_tc = i;
> }
>
> + if (dev->data->dev_started) {
> + ret = i40e_tx_queue_init(txq);
> + if (ret != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to do TX queue initialization");
> + return ret;
> + }
> +
> + /* check vector conflict */
> + if (ad->tx_vec_allowed) {
Same thing here:
i40e_dev_tx_queue_setup()->i40e_set_tx_function_flag()
can change both ad->tx_vec_allowed and tx_simple_allowed.
I think we have to do that check before device settings are affected.
> + if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
> + i40e_txq_vec_setup(txq)) {
> + PMD_DRV_LOG(ERR, "Failed vector tx setup");
> + i40e_dev_tx_queue_release(txq);
> + return -EINVAL;
> + }
> + }
> +
> + /* check simple tx conflict */
> + if (ad->tx_simple_allowed) {
> + if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
> + I40E_SIMPLE_FLAGS) ||
> + (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
> + }
> + PMD_DRV_LOG(ERR, "No-simple tx is required");
> + i40e_dev_tx_queue_release(txq);
> + return -EINVAL;
> + }
> + }
> +
As a nit - it is probably worth moving the functionality under if (dev->data->dev_started) {...}
into a separate function for both TX and RX.
Konstantin
> return 0;
> }
>
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] ether: support runtime queue setup
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 1/3] ether: support " Qi Zhang
@ 2018-03-25 19:47 ` Ananyev, Konstantin
0 siblings, 0 replies; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-03-25 19:47 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Wednesday, March 21, 2018 7:28 AM
> To: thomas@monjalon.net; Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v3 1/3] ether: support runtime queue setup
>
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support runtime queue configuraiton,
> then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> continue to setup the queue or just return fail when device already
> started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> v3:
> - not overload deferred start
> - rename deferred setup to runtime setup
>
> v2:
> - enhance comment
>
> doc/guides/nics/features.rst | 8 ++++++++
> lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> lib/librte_ether/rte_ethdev.h | 7 +++++++
> 3 files changed, 33 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 1b4fb979f..6983faa4e 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -892,7 +892,15 @@ Documentation describes performance values.
>
> See ``dpdk.org/doc/perf/*``.
>
> +.. _nic_features_queue_runtime_setup_capabilities:
>
> +Queue runtime setup capabilities
> +---------------------------------
> +
> +Supports queue setup / release after device started.
> +
> +* **[provides] rte_eth_dev_info**:
> ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
>
> .. _nic_features_other:
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0590f0c10..343b1a6c0 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
>
> @@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.runtime_queue_setup_capa &
> + DEV_RUNTIME_RX_QUEUE_SETUP))
> + return -EBUSY;
> +
> + if (dev->data->rx_queue_state[rx_queue_id] !=
> + RTE_ETH_QUEUE_STATE_STOPPED)
> + return -EBUSY;
> +
> rxq = dev->data->rx_queues;
> if (rxq[rx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
>
> @@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.runtime_queue_setup_capa &
> + DEV_RUNTIME_TX_QUEUE_SETUP))
> + return -EBUSY;
> +
> + if (dev->data->rx_queue_state[tx_queue_id] !=
> + RTE_ETH_QUEUE_STATE_STOPPED)
> + return -EBUSY;
> +
> txq = dev->data->tx_queues;
> if (txq[tx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 036153306..4e2088458 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -981,6 +981,11 @@ struct rte_eth_conf {
> */
> #define DEV_TX_OFFLOAD_SECURITY 0x00020000
>
> +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
> +/**< Deferred setup rx queue */
> +#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
> +/**< Deferred setup tx queue */
> +
> /*
> * If new Tx offload capabilities are defined, they also must be
> * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> @@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
> /** Configured number of rx/tx queues */
> uint16_t nb_rx_queues; /**< Number of RX queues. */
> uint16_t nb_tx_queues; /**< Number of TX queues. */
> + uint64_t runtime_queue_setup_capa;
> + /**< queues can be setup after dev_start (DEV_DEFERRED_). */
> };
>
> /**
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime queue setup
2018-03-25 19:46 ` Ananyev, Konstantin
@ 2018-03-26 8:49 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-03-26 8:49 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, March 26, 2018 3:46 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [PATCH v3 3/3] net/i40e: enable runtime queue setup
>
> Hi Qi,
>
> >
> > Expose the runtime queue configuration capability and enhance
> > i40e_dev_[rx|tx]_queue_setup to handle the situation when device
> > already started.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > v3:
> > - no queue start/stop in setup/release
> > - return fail when required rx/tx function conflict with
> > exist setup
> >
> > drivers/net/i40e/i40e_ethdev.c | 4 +++
> > drivers/net/i40e/i40e_rxtx.c | 64
> ++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 68 insertions(+)
> >
> > diff --git a/drivers/net/i40e/i40e_ethdev.c
> > b/drivers/net/i40e/i40e_ethdev.c index 508b4171c..68960dcaa 100644
> > --- a/drivers/net/i40e/i40e_ethdev.c
> > +++ b/drivers/net/i40e/i40e_ethdev.c
> > @@ -3197,6 +3197,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> > DEV_TX_OFFLOAD_GRE_TNL_TSO |
> > DEV_TX_OFFLOAD_IPIP_TNL_TSO |
> > DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> > + dev_info->runtime_queue_setup_capa =
> > + DEV_RUNTIME_RX_QUEUE_SETUP |
> > + DEV_RUNTIME_TX_QUEUE_SETUP;
> > +
> > dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
> > sizeof(uint32_t);
> > dev_info->reta_size = pf->hash_lut_size; diff --git
> > a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> > 1217e5a61..9eb009d63 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -1712,6 +1712,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> *dev,
> > uint16_t len, i;
> > uint16_t reg_idx, base, bsf, tc_mapping;
> > int q_offset, use_def_burst_func = 1;
> > + int ret = 0;
> >
> > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> I40E_MAC_X722_VF) {
> > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > @@ -1841,6 +1842,36 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev
> *dev,
> > rxq->dcb_tc = i;
> > }
> >
> > + if (dev->data->dev_started) {
> > + ret = i40e_rx_queue_init(rxq);
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to do RX queue initialization");
> > + return ret;
> > + }
>
> We probably also have to do here:
>
> if (use_def_burst_func != 0 && ad-> rx_bulk_alloc_allowed) {error;}
>
> and we have to do that before we assign ad-> rx_bulk_alloc_allowed (inside
> rx_queue_setup() few lines above).
Got your point, and I agree with all the following comments.
Thanks
Qi
>
>
> > + /* check vector conflict */
> > + if (ad->rx_vec_allowed) {
> > + if (i40e_rxq_vec_setup(rxq)) {
> > + PMD_DRV_LOG(ERR, "Failed vector rx setup");
> > + i40e_dev_rx_queue_release(rxq);
> > + return -EINVAL;
> > + }
> > + }
> > + /* check scatterred conflict */
> > + if (!dev->data->scattered_rx) {
> > + uint16_t buf_size =
> > + (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> > + RTE_PKTMBUF_HEADROOM);
> > +
> > + if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) >
> > + buf_size) {
> > + PMD_DRV_LOG(ERR, "Scattered rx is required");
> > + i40e_dev_rx_queue_release(rxq);
> > + return -EINVAL;
> > + }
> > + }
> > + }
> > +
> > return 0;
> > }
> >
> > @@ -1980,6 +2011,8 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > const struct rte_eth_txconf *tx_conf) {
> > struct i40e_hw *hw =
> I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > + struct i40e_adapter *ad =
> > + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > struct i40e_vsi *vsi;
> > struct i40e_pf *pf = NULL;
> > struct i40e_vf *vf = NULL;
> > @@ -1989,6 +2022,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > uint16_t tx_rs_thresh, tx_free_thresh;
> > uint16_t reg_idx, i, base, bsf, tc_mapping;
> > int q_offset;
> > + int ret = 0;
> >
> > if (hw->mac.type == I40E_MAC_VF || hw->mac.type ==
> I40E_MAC_X722_VF) {
> > vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> > @@ -2162,6 +2196,36 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev
> *dev,
> > txq->dcb_tc = i;
> > }
> >
> > + if (dev->data->dev_started) {
> > + ret = i40e_tx_queue_init(txq);
> > + if (ret != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to do TX queue initialization");
> > + return ret;
> > + }
> > +
> > + /* check vector conflict */
> > + if (ad->tx_vec_allowed) {
>
> Same thing here:
> i40e_dev_tx_queue_setup()->i40e_set_tx_function_flag()
> can change both ad->tx_vec_allowed and tx_simple_allowed.
> I think we have to do that check before device settings are affected.
>
> > + if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
> > + i40e_txq_vec_setup(txq)) {
> > + PMD_DRV_LOG(ERR, "Failed vector tx setup");
> > + i40e_dev_tx_queue_release(txq);
> > + return -EINVAL;
> > + }
> > + }
> > +
> > + /* check simple tx conflict */
> > + if (ad->tx_simple_allowed) {
> > + if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
> > + I40E_SIMPLE_FLAGS) ||
> > + (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
> > + }
> > + PMD_DRV_LOG(ERR, "No-simple tx is required");
> > + i40e_dev_tx_queue_release(txq);
> > + return -EINVAL;
> > + }
> > + }
> > +
>
> As a nit - probably worth to move functionality under if
> (dev->data->dev_started) {...} into a separate function for both TX and RX.
> Konstantin
>
>
> > return 0;
> > }
> >
> > --
> > 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (5 preceding siblings ...)
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
@ 2018-03-26 8:59 ` Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 1/3] ether: support " Qi Zhang
` (2 more replies)
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
` (3 subsequent siblings)
10 siblings, 3 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-26 8:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This can't satisfy the use case where the application wants to defer setup of
part of the queues while keeping traffic running on the queues that are
already set up.
example:
rte_eth_dev_configure(nb_rxq = 2, nb_txq = 2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx = 1 ...) /* fail */
Basically this is not a general hardware limitation, because for NICs
like i40e and ixgbe, it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has no
traffic on it.
The patch lets the etherdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
then, based on this flag, rte_eth_[rx|tx]_queue_setup can decide whether
to continue setting up the queue or just fail when the device is already
started.
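For the re-setup case, a minimal sketch under the rules of this series: the target
queue has to be stopped before rte_eth_rx_queue_setup() may be called again on a
running port (port_id, new_nb_desc and mb_pool are placeholders):

/* reconfigure rx queue 1 while the rest of the port keeps running;
 * error handling omitted for brevity */
rte_eth_dev_rx_queue_stop(port_id, 1);
rte_eth_rx_queue_setup(port_id, 1, new_nb_desc, rte_socket_id(), NULL, mb_pool);
rte_eth_dev_rx_queue_start(port_id, 1);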
v4:
- fix i40e rx/tx function conflict handling.
- no conflict check needed for the first rx/tx queue at runtime setup.
- fix missing offload parameter in the testpmd command line.
v3:
- do not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to the testpmd queue setup command line
- i40e fix: return failure when the required rx/tx function conflicts with
the existing setup.
v2:
- enhance comment in rte_ethdev.h
Qi Zhang (3):
ether: support runtime queue setup
app/testpmd: add command for queue setup
net/i40e: enable runtime queue setup
app/test-pmd/cmdline.c | 129 ++++++++++++++++++
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 +
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 195 ++++++++++++++++++++++++----
lib/librte_ether/rte_ethdev.c | 30 +++--
lib/librte_ether/rte_ethdev.h | 7 +
7 files changed, 345 insertions(+), 35 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] ether: support runtime queue setup
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
@ 2018-03-26 8:59 ` Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-03-26 8:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
The patch lets the etherdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
then, based on the flag, rte_eth_[rx|tx]_queue_setup can decide whether
to continue setting up the queue or just fail when the device is already
started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v3:
- not overload deferred start
- rename deferred setup to runtime setup
v2:
- enhance comment
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..6983faa4e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_runtime_setup_capabilities:
+Queue runtime setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0590f0c10..343b1a6c0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 036153306..4e2088458 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Deferred setup rx queue */
+#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Deferred setup tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t runtime_queue_setup_capa;
+ /**< queues can be setup after dev_start (DEV_DEFERRED_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for queue setup
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 1/3] ether: support " Qi Zhang
@ 2018-03-26 8:59 ` Qi Zhang
2018-04-01 12:21 ` Ananyev, Konstantin
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-26 8:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
rte_eth_[rx|tx]_queue_setup will be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v4:
- fix missing offload parameter in the command line.
v3:
- add offload parameter to the queue setup command.
- a couple of code refactors.
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 136 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d1dc1de6c..1b0bbd9f4 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)\n"
+ " setup a not started queue or re-setup a started queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,131 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+ uint64_t offloads;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_offloads =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads, UINT64);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ struct rte_eth_rxconf rxconf = port->rx_conf;
+
+ rxconf.offloads = res->offloads;
+ socket_id = rxring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &rxconf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ struct rte_eth_txconf txconf = port->tx_conf;
+
+ txconf.offloads = res->offloads;
+ socket_id = txring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &txconf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size> <offloads>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ (void *)&cmd_queue_setup_offloads,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16400,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..e8e0b0a4e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Setup a not started queue or re-setup a started queue::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
+
Port Functions
--------------
--
2.13.6
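As an example of the command added above, re-configuring rx queue 1 of port 0 with
a 256-entry ring and no extra offloads could look like this (the existing
"port <port_id> rxq <queue_idx> stop|start" commands are used to stop and restart
the queue around the setup call; the values are only illustrative):

testpmd> port 0 rxq 1 stop
testpmd> queue setup rx 0 1 256 0
testpmd> port 0 rxq 1 start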
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime queue setup
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 1/3] ether: support " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-03-26 8:59 ` Qi Zhang
2018-04-01 12:18 ` Ananyev, Konstantin
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-03-26 8:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v4:
- fix the rx/tx conflict check.
- no conflict check needed for the first rx/tx queue at runtime setup.
v3:
- no queue start/stop in setup/release
- return failure when the required rx/tx function conflicts with
the existing setup
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 195 ++++++++++++++++++++++++++++++++++++-----
2 files changed, 176 insertions(+), 23 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 508b4171c..68960dcaa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3197,6 +3197,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->runtime_queue_setup_capa =
+ DEV_RUNTIME_RX_QUEUE_SETUP |
+ DEV_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..101c20ba0 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1692,6 +1692,75 @@ i40e_dev_supported_ptypes_get(struct rte_eth_dev *dev)
return NULL;
}
+static int
+i40e_dev_first_rx_queue(struct rte_eth_dev *dev,
+ uint16_t queue_idx)
+{
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ if (i != queue_idx && dev->data->rx_queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_rx_queue *rxq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ int use_scattered_rx =
+ ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) ?
+ 1 : 0;
+
+ if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_rx_queue(dev, rxq->queue_id)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_rx_function.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->rx_vec_allowed = true;
+ dev->data->scattered_rx = use_scattered_rx;
+ if (use_def_burst_func)
+ ad->rx_bulk_alloc_allowed = false;
+ i40e_set_rx_function(dev);
+ return 0;
+ }
+
+ /* check bulk alloc conflict */
+ if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
+ PMD_DRV_LOG(ERR, "Can't use default burst.");
+ return -EINVAL;
+ }
+ /* check scatterred conflict */
+ if (!dev->data->scattered_rx && use_scattered_rx) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required.");
+ return -EINVAL;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1808,25 +1877,6 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_rx_queue(rxq);
rxq->q_set = TRUE;
- dev->data->rx_queues[queue_idx] = rxq;
-
- use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq);
-
- if (!use_def_burst_func) {
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "satisfied. Rx Burst Bulk Alloc function will be "
- "used on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
- } else {
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "not satisfied, Scattered Rx is requested, "
- "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
- "not enabled on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
- ad->rx_bulk_alloc_allowed = false;
- }
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -1841,6 +1891,34 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_rx_queue_setup_runtime(dev, rxq)) {
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ } else {
+ use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+ }
+
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
@@ -1972,6 +2050,67 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
return RTE_ETH_TX_DESC_FULL;
}
+static int
+i40e_dev_first_tx_queue(struct rte_eth_dev *dev,
+ uint16_t queue_idx)
+{
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ if (i != queue_idx && dev->data->rx_queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_tx_queue *txq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (i40e_tx_queue_init(txq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_tx_queue(dev, txq->queue_id)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_tx_function.
+ */
+ ad->tx_simple_allowed = true;
+ ad->tx_vec_allowed = true;
+ i40e_set_tx_function_flag(dev, txq);
+ i40e_set_tx_function(dev);
+ return 0;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup.");
+ return -EINVAL;
+ }
+ }
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
+ PMD_DRV_LOG(ERR, "No-simple tx is required.");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
int
i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -2144,10 +2283,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_tx_queue(txq);
txq->q_set = TRUE;
- dev->data->tx_queues[queue_idx] = txq;
-
- /* Use a simple TX queue without offloads or multi segs if possible */
- i40e_set_tx_function_flag(dev, txq);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -2162,6 +2297,20 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_tx_queue_setup_runtime(dev, txq)) {
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ } else {
+ /**
+ * Use a simple TX queue without offloads or
+ * multi segs if possible
+ */
+ i40e_set_tx_function_flag(dev, txq);
+ }
+ dev->data->tx_queues[queue_idx] = txq;
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime queue setup
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime " Qi Zhang
@ 2018-04-01 12:18 ` Ananyev, Konstantin
2018-04-02 2:20 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-04-01 12:18 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> +
> +static int
> +i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
> + struct i40e_rx_queue *rxq)
> +{
> + struct i40e_adapter *ad =
> + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> + int use_def_burst_func =
> + check_rx_burst_bulk_alloc_preconditions(rxq);
> + uint16_t buf_size =
> + (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> + RTE_PKTMBUF_HEADROOM);
> + int use_scattered_rx =
> + ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) ?
> + 1 : 0;
As a nit:
int use_scattered_rx = ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size);
would do exactly the same.
> +
> + if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
> + PMD_DRV_LOG(ERR,
> + "Failed to do RX queue initialization");
> + return -EINVAL;
> + }
> +
> + if (i40e_dev_first_rx_queue(dev, rxq->queue_id)) {
> + /**
> + * If it is the first queue to setup,
> + * set all flags to default and call
> + * i40e_set_rx_function.
> + */
> + ad->rx_bulk_alloc_allowed = true;
> + ad->rx_vec_allowed = true;
> + dev->data->scattered_rx = use_scattered_rx;
> + if (use_def_burst_func)
> + ad->rx_bulk_alloc_allowed = false;
> + i40e_set_rx_function(dev);
> + return 0;
> + }
> +
> + /* check bulk alloc conflict */
> + if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
> + PMD_DRV_LOG(ERR, "Can't use default burst.");
> + return -EINVAL;
> + }
> + /* check scatterred conflict */
> + if (!dev->data->scattered_rx && use_scattered_rx) {
> + PMD_DRV_LOG(ERR, "Scattered rx is required.");
> + return -EINVAL;
> + }
> + /* check vector conflict */
> + if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
> + PMD_DRV_LOG(ERR, "Failed vector rx setup.");
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
...
> +
> +static int
> +i40e_dev_first_tx_queue(struct rte_eth_dev *dev,
> + uint16_t queue_idx)
> +{
> + uint16_t i;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> + if (i != queue_idx && dev->data->rx_queues[i])
> + return 0;
> + }
> +
> + return 1;
> +}
I suppose it should be tx_queues and nb_tx_queues here.
BTW you probably can merge i40e_dev_first_tx_queue() and i40e_dev_first_rx_queue()
into one function.
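A minimal sketch of such a merged helper (illustrative naming; this is essentially the shape the v5 revision of this series adopts as i40e_dev_first_queue()):

	/* Return 1 if no queue other than 'idx' has been set up yet. */
	static int
	i40e_dev_first_queue(uint16_t idx, void **queues, int num)
	{
		uint16_t i;

		for (i = 0; i < num; i++) {
			if (i != idx && queues[i])
				return 0;
		}
		return 1;
	}

Callers would then pass dev->data->rx_queues/nb_rx_queues or dev->data->tx_queues/nb_tx_queues respectively.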
Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for queue setup
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-04-01 12:21 ` Ananyev, Konstantin
0 siblings, 0 replies; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-04-01 12:21 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> +queue setup
> +~~~~~~~~~~~
> +
> +Setup a not started queue or re-setup a started queue::
> +
> + testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
As a nit - this probably needs rephrasing; it is not possible to set up a queue that is already started.
Konstantin
> +
> Port Functions
> --------------
>
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime queue setup
2018-04-01 12:18 ` Ananyev, Konstantin
@ 2018-04-02 2:20 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-02 2:20 UTC (permalink / raw)
To: Ananyev, Konstantin, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Sunday, April 1, 2018 8:18 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [PATCH v4 3/3] net/i40e: enable runtime queue setup
>
>
>
> > +
> > +static int
> > +i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
> > + struct i40e_rx_queue *rxq)
> > +{
> > + struct i40e_adapter *ad =
> > + I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> > + int use_def_burst_func =
> > + check_rx_burst_bulk_alloc_preconditions(rxq);
> > + uint16_t buf_size =
> > + (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> > + RTE_PKTMBUF_HEADROOM);
> > + int use_scattered_rx =
> > + ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) ?
> > + 1 : 0;
>
> As a nit:
> int use_scattered_rx = ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) >
> buf_size); would do exactly the same.
>
> > +
> > + if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
> > + PMD_DRV_LOG(ERR,
> > + "Failed to do RX queue initialization");
> > + return -EINVAL;
> > + }
> > +
> > + if (i40e_dev_first_rx_queue(dev, rxq->queue_id)) {
> > + /**
> > + * If it is the first queue to setup,
> > + * set all flags to default and call
> > + * i40e_set_rx_function.
> > + */
> > + ad->rx_bulk_alloc_allowed = true;
> > + ad->rx_vec_allowed = true;
> > + dev->data->scattered_rx = use_scattered_rx;
> > + if (use_def_burst_func)
> > + ad->rx_bulk_alloc_allowed = false;
> > + i40e_set_rx_function(dev);
> > + return 0;
> > + }
> > +
> > + /* check bulk alloc conflict */
> > + if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
> > + PMD_DRV_LOG(ERR, "Can't use default burst.");
> > + return -EINVAL;
> > + }
> > + /* check scatterred conflict */
> > + if (!dev->data->scattered_rx && use_scattered_rx) {
> > + PMD_DRV_LOG(ERR, "Scattered rx is required.");
> > + return -EINVAL;
> > + }
> > + /* check vector conflict */
> > + if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
> > + PMD_DRV_LOG(ERR, "Failed vector rx setup.");
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
>
> ...
>
> > +
> > +static int
> > +i40e_dev_first_tx_queue(struct rte_eth_dev *dev,
> > + uint16_t queue_idx)
> > +{
> > + uint16_t i;
> > +
> > + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > + if (i != queue_idx && dev->data->rx_queues[i])
> > + return 0;
> > + }
> > +
> > + return 1;
> > +}
>
> I suppose it should be tx_queues and nb_tx_queues here.
> BTW you probably can merge i40e_dev_first_tx_queue() and
> i40e_dev_first_rx_queue() into one function.
Thanks for catching this, will fix.
> Konstantin
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (6 preceding siblings ...)
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
@ 2018-04-02 2:59 ` Qi Zhang
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 1/3] ether: support " Qi Zhang
` (3 more replies)
2018-04-08 2:42 ` Qi Zhang
` (2 subsequent siblings)
10 siblings, 4 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-02 2:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
v5:
- fix first tx queue check in i40e.
v4:
- fix i40e rx/tx function conflict handling.
- no conflict check needed for the first rx/tx queue at runtime setup.
- fix missing offload parameter in testpmd cmdline.
v3:
- do not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to testpmd queue setup command line
- i40e fix: return failure when the required rx/tx function conflicts
with the existing setup.
v2:
- enhance comment in rte_ethdev.h
In the existing implementation, rte_eth_[rx|tx]_queue_setup always
returns failure if the device is already started (rte_eth_dev_start).
This does not satisfy the case where an application wants to defer the
setup of part of the queues while keeping traffic running on the queues
that are already set up.
example:
rte_eth_dev_config(nb_rxq = 2, nb_txq =2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
Basically this is not a general hardware limitation: for NICs like
i40e and ixgbe, it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has
no traffic on it.
The patches let an ethdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
based on this flag, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to return failure when the device is
already started.
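As an illustrative application-side sketch of the flow above (port_id, port_conf, nb_rxd, nb_txd, socket_id and mbuf_pool are placeholders; error handling omitted):

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_configure(port_id, 2, 2, &port_conf);
	rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, NULL, mbuf_pool);
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, NULL);
	rte_eth_dev_start(port_id);	/* rx/tx burst can already run on queue 0 */

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.runtime_queue_setup_capa & DEV_RUNTIME_RX_QUEUE_SETUP)
		rte_eth_rx_queue_setup(port_id, 1, nb_rxd, socket_id,
				       NULL, mbuf_pool); /* allowed at runtime */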
Qi Zhang (3):
ether: support runtime queue setup
app/testpmd: add command for queue setup
net/i40e: enable runtime queue setup
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 ++++++++++++++++++++++++----
lib/librte_ether/rte_ethdev.c | 30 +++--
lib/librte_ether/rte_ethdev.h | 7 ++
7 files changed, 333 insertions(+), 35 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
@ 2018-04-02 2:59 ` Qi Zhang
2018-04-06 19:42 ` Rosen, Rami
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for " Qi Zhang
` (2 subsequent siblings)
3 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-02 2:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
The patch lets an ethdev driver expose a capability flag through
rte_eth_dev_info_get when it supports runtime queue configuration;
based on this flag, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or to return failure when the device is
already started.
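On the driver side, advertising support is simply a matter of setting the new dev_info field in the dev_infos_get callback, for example (this is what patch 3/3 of this series does for i40e):

	dev_info->runtime_queue_setup_capa =
		DEV_RUNTIME_RX_QUEUE_SETUP |
		DEV_RUNTIME_TX_QUEUE_SETUP;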
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..6983faa4e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_runtime_setup_capabilities:
+Queue runtime setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``runtime_queue_setup_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0590f0c10..343b1a6c0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 036153306..4e2088458 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Deferred setup rx queue */
+#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Deferred setup tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t runtime_queue_setup_capa;
+ /**< Queues can be set up after dev_start (DEV_RUNTIME_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 1/3] ether: support " Qi Zhang
@ 2018-04-02 2:59 ` Qi Zhang
2018-04-07 15:49 ` Rosen, Rami
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 3/3] net/i40e: enable runtime " Qi Zhang
2018-04-02 23:36 ` [dpdk-dev] [PATCH v5 0/3] " Ananyev, Konstantin
3 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-02 2:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
rte_eth_[rx|tx]_queue_setup will be called correspondingly.
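For illustration (the numeric values are arbitrary and assume the port was configured with at least two RX queues), setting up RX queue 1 of port 0 with a 512-entry ring and no extra offloads would be:

	testpmd> queue setup rx 0 1 512 0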
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v5:
- fix command description.
v4:
- fix missing offload in command line.
v3:
- add offload parameter to queue setup command.
- a couple of code refactors.
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 136 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d1dc1de6c..449c7c634 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)\n"
+ " setup a rx or tx queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,131 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+ uint64_t offloads;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_offloads =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads, UINT64);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ struct rte_eth_rxconf rxconf = port->rx_conf;
+
+ rxconf.offloads = res->offloads;
+ socket_id = rxring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &rxconf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ struct rte_eth_txconf txconf = port->tx_conf;
+
+ txconf.offloads = res->offloads;
+ socket_id = txring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &txconf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size> <offloads>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ (void *)&cmd_queue_setup_offloads,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16400,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..e8e0b0a4e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Setup a rx or tx queue::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v5 3/3] net/i40e: enable runtime queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 1/3] ether: support " Qi Zhang
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-04-02 2:59 ` Qi Zhang
2018-04-02 23:36 ` [dpdk-dev] [PATCH v5 0/3] " Ananyev, Konstantin
3 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-02 2:59 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v5:
- fix first tx queue check.
v4:
- fix rx/tx conflict check.
- no conflict check needed for the first rx/tx queue at runtime setup.
v3:
- no queue start/stop in setup/release
- return failure when the required rx/tx function conflicts with the
existing setup
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 +++++++++++++++++++++++++++++++++++------
2 files changed, 164 insertions(+), 23 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 508b4171c..68960dcaa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3197,6 +3197,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->runtime_queue_setup_capa =
+ DEV_RUNTIME_RX_QUEUE_SETUP |
+ DEV_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..0115ae731 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1692,6 +1692,75 @@ i40e_dev_supported_ptypes_get(struct rte_eth_dev *dev)
return NULL;
}
+static int
+i40e_dev_first_queue(uint16_t idx, void **queues, int num)
+{
+ uint16_t i;
+
+ for (i = 0; i < num; i++) {
+ if (i != idx && queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_rx_queue *rxq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ int use_scattered_rx =
+ ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size);
+
+ if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(rxq->queue_id,
+ dev->data->rx_queues,
+ dev->data->nb_rx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_rx_function.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->rx_vec_allowed = true;
+ dev->data->scattered_rx = use_scattered_rx;
+ if (use_def_burst_func)
+ ad->rx_bulk_alloc_allowed = false;
+ i40e_set_rx_function(dev);
+ return 0;
+ }
+
+ /* check bulk alloc conflict */
+ if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
+ PMD_DRV_LOG(ERR, "Can't use default burst.");
+ return -EINVAL;
+ }
+ /* check scatterred conflict */
+ if (!dev->data->scattered_rx && use_scattered_rx) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required.");
+ return -EINVAL;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1808,25 +1877,6 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_rx_queue(rxq);
rxq->q_set = TRUE;
- dev->data->rx_queues[queue_idx] = rxq;
-
- use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq);
-
- if (!use_def_burst_func) {
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "satisfied. Rx Burst Bulk Alloc function will be "
- "used on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
- } else {
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "not satisfied, Scattered Rx is requested, "
- "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
- "not enabled on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
- ad->rx_bulk_alloc_allowed = false;
- }
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -1841,6 +1891,34 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_rx_queue_setup_runtime(dev, rxq)) {
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ } else {
+ use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+ }
+
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
@@ -1972,6 +2050,55 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
return RTE_ETH_TX_DESC_FULL;
}
+static int
+i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_tx_queue *txq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (i40e_tx_queue_init(txq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(txq->queue_id,
+ dev->data->tx_queues,
+ dev->data->nb_tx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_tx_function.
+ */
+ ad->tx_simple_allowed = true;
+ ad->tx_vec_allowed = true;
+ i40e_set_tx_function_flag(dev, txq);
+ i40e_set_tx_function(dev);
+ return 0;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup.");
+ return -EINVAL;
+ }
+ }
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
+ PMD_DRV_LOG(ERR, "No-simple tx is required.");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
int
i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -2144,10 +2271,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_tx_queue(txq);
txq->q_set = TRUE;
- dev->data->tx_queues[queue_idx] = txq;
-
- /* Use a simple TX queue without offloads or multi segs if possible */
- i40e_set_tx_function_flag(dev, txq);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -2162,6 +2285,20 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_tx_queue_setup_runtime(dev, txq)) {
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ } else {
+ /**
+ * Use a simple TX queue without offloads or
+ * multi segs if possible
+ */
+ i40e_set_tx_function_flag(dev, txq);
+ }
+ dev->data->tx_queues[queue_idx] = txq;
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/3] runtime queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
` (2 preceding siblings ...)
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 3/3] net/i40e: enable runtime " Qi Zhang
@ 2018-04-02 23:36 ` Ananyev, Konstantin
3 siblings, 0 replies; 95+ messages in thread
From: Ananyev, Konstantin @ 2018-04-02 23:36 UTC (permalink / raw)
To: Zhang, Qi Z, thomas; +Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Monday, April 2, 2018 4:00 AM
> To: thomas@monjalon.net; Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v5 0/3] runtime queue setup
>
> v5:
> - fix first tx queue check in i40e.
>
> v4:
> - fix i40e rx/tx funciton conflict handle.
> - no need conflict check for first rx/tx queue at runtime setup.
> - fix missing offload paramter in testpmd cmdline.
>
> v3:
> - not overload deferred start.
> - rename deferred setup to runtime setup.
> - remove unecessary testpmd parameters (patch 2/4 of v2)
> - add offload support to testpmd queue setup command line
> - i40e fix: return fail when required rx/tx function conflict with
> exist setup.
>
> v2:
> - enhance comment in rte_ethdev.h
>
> According to exist implementation,rte_eth_[rx|tx]_queue_setup will
> always return fail if device is already started(rte_eth_dev_start).
>
> This can't satisfied the usage when application want to deferred setup
> part of the queues while keep traffic running on those queues already
> be setup.
>
> example:
> rte_eth_dev_config(nb_rxq = 2, nb_txq =2)
> rte_eth_rx_queue_setup(idx = 0 ...)
> rte_eth_rx_queue_setup(idx = 0 ...)
> rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
> rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
>
> Basically this is not a general hardware limitation, because for NIC
> like i40e, ixgbe, it is not necessary to stop the whole device before
> configure a fresh queue or reconfigure an exist queue with no traffic
> on it.
>
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support deferred queue configuraiton,
> then base on these flag, rte_eth_[rx|tx]_queue_setup could decide
> continue to setup the queue or just return fail when device already
> started.
>
>
> Qi Zhang (3):
> ether: support runtime queue setup
> app/testpmd: add command for queue setup
> net/i40e: enable runtime queue setup
>
> app/test-pmd/cmdline.c | 129 ++++++++++++++++++++
> doc/guides/nics/features.rst | 8 ++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
> drivers/net/i40e/i40e_ethdev.c | 4 +
> drivers/net/i40e/i40e_rxtx.c | 183 ++++++++++++++++++++++++----
> lib/librte_ether/rte_ethdev.c | 30 +++--
> lib/librte_ether/rte_ethdev.h | 7 ++
> 7 files changed, 333 insertions(+), 35 deletions(-)
>
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 1/3] ether: support " Qi Zhang
@ 2018-04-06 19:42 ` Rosen, Rami
2018-04-08 2:20 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Rosen, Rami @ 2018-04-06 19:42 UTC (permalink / raw)
To: Zhang, Qi Z, thomas, Ananyev, Konstantin
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
Hi Qi,
Thanks for these patches.
See my comment below.
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
Sent: Monday, April 02, 2018 06:00
To: thomas@monjalon.net; Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
The patch let etherdev driver expose the capability flag through rte_eth_dev_info_get when it support runtime queue configuraiton, then base on the flag rte_eth_[rx|tx]_queue_setup could decide continue to setup the queue or just return fail when device already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/features.rst | 8 ++++++++ lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------ lib/librte_ether/rte_ethdev.h | 7 +++++++
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index 1b4fb979f..6983faa4e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_runtime_setup_capabilities:
+Queue runtime setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 0590f0c10..343b1a6c0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
[Rami Rosen] Shouldn't this be dev->data->tx_queue_state[...] instead of
dev->data->rx_queue_state[...]? We are dealing with the TX queue here.
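For reference, the check presumably intended here is:

	if (dev->data->tx_queue_state[tx_queue_id] !=
			RTE_ETH_QUEUE_STATE_STOPPED)
		return -EBUSY;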
+ if (dev->data->rx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 036153306..4e2088458 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001 /**< Deferred setup rx
+queue */ #define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002 /**< Deferred
+setup tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t runtime_queue_setup_capa;
+ /**< queues can be setup after dev_start (DEV_DEFERRED_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue setup
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-04-07 15:49 ` Rosen, Rami
2018-04-08 2:22 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Rosen, Rami @ 2018-04-07 15:49 UTC (permalink / raw)
To: Zhang, Qi Z, thomas, Ananyev, Konstantin
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo, Zhang, Qi Z
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
Sent: Monday, April 02, 2018 06:00
To: thomas@monjalon.net; Ananyev, Konstantin <konstantin.ananyev@intel.com>
Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue setup
Add new command to setup queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
rte_eth_[rx|tx]_queue_setup will be called corresponsively
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v5:
- fix command description.
v4:
- fix missing offload in command line.
v3:
- add offload parameter to queue setup command.
- couple code refactory.
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 136 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index d1dc1de6c..449c7c634 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)\n"
+ " setup a rx or tx queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,131 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */ struct
+cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+ uint64_t offloads;
+};
+
+/* Common CLI fields for queue setup */ cmdline_parse_token_string_t
+cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue,
+"queue"); cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup,
+"setup"); cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx,
+"rx#tx"); cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx,
+UINT16); cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size,
+UINT16); cmdline_parse_token_num_t cmd_queue_setup_offloads =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads,
+UINT64);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
[Rami Rosen] Nitpick: shouldn't it be: must > rx_free_thresh: ?
+ printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ struct rte_eth_rxconf rxconf = port->rx_conf;
+
+ rxconf.offloads = res->offloads;
+ socket_id = rxring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &rxconf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ struct rte_eth_txconf txconf = port->tx_conf;
+
+ txconf.offloads = res->offloads;
+ socket_id = txring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &txconf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size> <offloads>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ (void *)&cmd_queue_setup_offloads,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16400,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..e8e0b0a4e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Setup a rx or tx queue::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
+ (offloads)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
2018-04-06 19:42 ` Rosen, Rami
@ 2018-04-08 2:20 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-08 2:20 UTC (permalink / raw)
To: Rosen, Rami, thomas, Ananyev, Konstantin
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Rosen, Rami
> Sent: Saturday, April 7, 2018 3:42 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
>
> Hi Qi,
> Thanks for these patches.
> See my comment below.
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> Sent: Monday, April 02, 2018 06:00
> To: thomas@monjalon.net; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v5 1/3] ether: support runtime queue setup
>
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support runtime queue configuraiton, then base
> on the flag rte_eth_[rx|tx]_queue_setup could decide continue to setup the
> queue or just return fail when device already started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> doc/guides/nics/features.rst | 8 ++++++++ lib/librte_ether/rte_ethdev.c
> | 30 ++++++++++++++++++------------ lib/librte_ether/rte_ethdev.h | 7
> +++++++
> 3 files changed, 33 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index
> 1b4fb979f..6983faa4e 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -892,7 +892,15 @@ Documentation describes performance values.
>
> See ``dpdk.org/doc/perf/*``.
>
> +.. _nic_features_queue_runtime_setup_capabilities:
>
> +Queue runtime setup capabilities
> +---------------------------------
> +
> +Supports queue setup / release after device started.
> +
> +* **[provides] rte_eth_dev_info**:
> ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTI
> ME_TX_QUEUE_SETUP``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
>
> .. _nic_features_other:
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index
> 0590f0c10..343b1a6c0 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup,
> -ENOTSUP);
>
> @@ -1474,6 +1468,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.runtime_queue_setup_capa &
> + DEV_RUNTIME_RX_QUEUE_SETUP))
> + return -EBUSY;
> +
> + if (dev->data->rx_queue_state[rx_queue_id] !=
> + RTE_ETH_QUEUE_STATE_STOPPED)
> + return -EBUSY;
> +
> rxq = dev->data->rx_queues;
> if (rxq[rx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> @@ -1573,12 +1576,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t
> tx_queue_id,
> return -EINVAL;
> }
>
> - if (dev->data->dev_started) {
> - RTE_PMD_DEBUG_TRACE(
> - "port %d must be stopped to allow configuration\n", port_id);
> - return -EBUSY;
> - }
> -
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get,
> -ENOTSUP);
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup,
> -ENOTSUP);
>
> @@ -1596,6 +1593,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t
> tx_queue_id,
> return -EINVAL;
> }
>
> + if (dev->data->dev_started &&
> + !(dev_info.runtime_queue_setup_capa &
> + DEV_RUNTIME_TX_QUEUE_SETUP))
> + return -EBUSY;
> +
> [Rami Rosen] Shouldn't it be here: dev->data->tx_queue_state[...] instead of:
> dev->data->rx_queue_state[...] ? we are dealing
> with the TX queue.
Thanks, will fix.
>
> + if (dev->data->rx_queue_state[tx_queue_id] !=
> + RTE_ETH_QUEUE_STATE_STOPPED)
> + return -EBUSY;
> +
> txq = dev->data->tx_queues;
> if (txq[tx_queue_id]) {
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index
> 036153306..4e2088458 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -981,6 +981,11 @@ struct rte_eth_conf {
> */
> #define DEV_TX_OFFLOAD_SECURITY 0x00020000
>
> +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001 /**< Deferred setup
> rx
> +queue */ #define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002 /**<
> Deferred
> +setup tx queue */
> +
> /*
> * If new Tx offload capabilities are defined, they also must be
> * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> @@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
> /** Configured number of rx/tx queues */
> uint16_t nb_rx_queues; /**< Number of RX queues. */
> uint16_t nb_tx_queues; /**< Number of TX queues. */
> + uint64_t runtime_queue_setup_capa;
> + /**< queues can be setup after dev_start (DEV_DEFERRED_). */
> };
>
> /**
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue setup
2018-04-07 15:49 ` Rosen, Rami
@ 2018-04-08 2:22 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-08 2:22 UTC (permalink / raw)
To: Rosen, Rami, thomas, Ananyev, Konstantin
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Rosen, Rami
> Sent: Saturday, April 7, 2018 11:50 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue
> setup
>
>
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qi Zhang
> Sent: Monday, April 02, 2018 06:00
> To: thomas@monjalon.net; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for queue
> setup
>
> Add new command to setup queue:
> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
>
> rte_eth_[rx|tx]_queue_setup will be called corresponsively
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> v5:
> - fix command description.
>
> v4:
> - fix missing offload in command line.
>
> v3:
> - add offload parameter to queue setup command.
> - couple code refactory.
>
> app/test-pmd/cmdline.c | 129
> ++++++++++++++++++++++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
> 2 files changed, 136 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> d1dc1de6c..449c7c634 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void
> *parsed_result,
> "port tm hierarchy commit (port_id) (clean_on_fail)\n"
> " Commit tm hierarchy.\n\n"
>
> + "queue setup (rx|tx) (port_id) (queue_idx) (ring_size)
> (offloads)\n"
> + " setup a rx or tx queue.\n\n"
> +
> , list_pkt_forwarding_modes()
> );
> }
> @@ -16030,6 +16033,131 @@ cmdline_parse_inst_t cmd_load_from_file = {
> },
> };
>
> +/* Queue Setup */
> +
> +/* Common result structure for queue setup */ struct
> +cmd_queue_setup_result {
> + cmdline_fixed_string_t queue;
> + cmdline_fixed_string_t setup;
> + cmdline_fixed_string_t rxtx;
> + portid_t port_id;
> + uint16_t queue_idx;
> + uint16_t ring_size;
> + uint64_t offloads;
> +};
> +
> +/* Common CLI fields for queue setup */ cmdline_parse_token_string_t
> +cmd_queue_setup_queue =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue,
> +"queue"); cmdline_parse_token_string_t cmd_queue_setup_setup =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup,
> +"setup"); cmdline_parse_token_string_t cmd_queue_setup_rxtx =
> + TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx,
> +"rx#tx"); cmdline_parse_token_num_t cmd_queue_setup_port_id =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id,
> UINT16);
> +cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx,
> +UINT16); cmdline_parse_token_num_t cmd_queue_setup_ring_size =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size,
> +UINT16); cmdline_parse_token_num_t cmd_queue_setup_offloads =
> + TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads,
> +UINT64);
> +
> +static void
> +cmd_queue_setup_parsed(
> + void *parsed_result,
> + __attribute__((unused)) struct cmdline *cl,
> + __attribute__((unused)) void *data)
> +{
> + struct cmd_queue_setup_result *res = parsed_result;
> + struct rte_port *port;
> + struct rte_mempool *mp;
> + unsigned int socket_id;
> + uint8_t rx = 1;
> + int ret;
> +
> + if (port_id_is_invalid(res->port_id, ENABLED_WARN))
> + return;
> +
> + if (!strcmp(res->rxtx, "tx"))
> + rx = 0;
> +
> + if (rx && res->ring_size <= rx_free_thresh) {
>
>
> [Rami Rosen] Nitpick: shouldn't it be: must > rx_free_thresh: ?
Good catch, will fix.
>
> + printf("Invalid ring_size, must >= rx_free_thresh: %d\n",
> + rx_free_thresh);
> + return;
> + }
> +
> + if (rx && res->queue_idx >= nb_rxq) {
> + printf("Invalid rx queue index, must < nb_rxq: %d\n",
> + nb_rxq);
> + return;
> + }
> +
> + if (!rx && res->queue_idx >= nb_txq) {
> + printf("Invalid tx queue index, must < nb_txq: %d\n",
> + nb_txq);
> + return;
> + }
> +
> + port = &ports[res->port_id];
> + if (rx) {
> + struct rte_eth_rxconf rxconf = port->rx_conf;
> +
> + rxconf.offloads = res->offloads;
> + socket_id = rxring_numa[res->port_id];
> + if (!numa_support || socket_id == NUMA_NO_CONFIG)
> + socket_id = port->socket_id;
> +
> + mp = mbuf_pool_find(socket_id);
> + if (mp == NULL) {
> + printf("Failed to setup RX queue: "
> + "No mempool allocation"
> + " on the socket %d\n",
> + rxring_numa[res->port_id]);
> + return;
> + }
> + ret = rte_eth_rx_queue_setup(res->port_id,
> + res->queue_idx,
> + res->ring_size,
> + socket_id,
> + &rxconf,
> + mp);
> + if (ret)
> + printf("Failed to setup RX queue\n");
> + } else {
> + struct rte_eth_txconf txconf = port->tx_conf;
> +
> + txconf.offloads = res->offloads;
> + socket_id = txring_numa[res->port_id];
> + if (!numa_support || socket_id == NUMA_NO_CONFIG)
> + socket_id = port->socket_id;
> +
> + ret = rte_eth_tx_queue_setup(res->port_id,
> + res->queue_idx,
> + res->ring_size,
> + socket_id,
> + &txconf);
> + if (ret)
> + printf("Failed to setup TX queue\n");
> + }
> +}
> +
> +cmdline_parse_inst_t cmd_queue_setup = {
> + .f = cmd_queue_setup_parsed,
> + .data = NULL,
> + .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size>
> <offloads>",
> + .tokens = {
> + (void *)&cmd_queue_setup_queue,
> + (void *)&cmd_queue_setup_setup,
> + (void *)&cmd_queue_setup_rxtx,
> + (void *)&cmd_queue_setup_port_id,
> + (void *)&cmd_queue_setup_queue_idx,
> + (void *)&cmd_queue_setup_ring_size,
> + (void *)&cmd_queue_setup_offloads,
> + NULL,
> + },
> +};
> +
> /* ******************************************************************************** */
>
> /* list of instructions */
> @@ -16272,6 +16400,7 @@ cmdline_parse_ctx_t main_ctx[] = {
> (cmdline_parse_inst_t *)&cmd_del_port_tm_node,
> (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
> (cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
> + (cmdline_parse_inst_t *)&cmd_queue_setup,
> NULL,
> };
>
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index a766ac795..e8e0b0a4e 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1444,6 +1444,13 @@ Reset ptype mapping table::
>
> testpmd> ptype mapping reset (port_id)
>
> +queue setup
> +~~~~~~~~~~~
> +
> +Setup a rx or tx queue::
> +
> + testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
> +
> Port Functions
> --------------
>
> --
> 2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (7 preceding siblings ...)
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
@ 2018-04-08 2:42 ` Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
` (2 more replies)
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
10 siblings, 3 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-08 2:42 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
v6:
- fix tx queue state check in rte_eth_tx_queue_setup
- fix error message in testpmd.
v5:
- fix first tx queue check in i40e.
v4:
- fix i40e rx/tx function conflict handling.
- no conflict check needed for the first rx/tx queue at runtime setup.
- fix missing offload parameter in testpmd cmdline.
v3:
- do not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to testpmd queue setup command line
- i40e fix: return fail when the required rx/tx function conflicts with
the existing setup.
v2:
- enhance comment in rte_ethdev.h
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This does not satisfy the use case where an application wants to set up
part of the queues later, while traffic keeps running on the queues that
are already set up.
example:
rte_eth_dev_config(nb_rxq = 2, nb_txq =2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
Basically this is not a general hardware limitation, because for NICs
like i40e and ixgbe it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has no
traffic on it.
The patch lets the ethdev driver expose capability flags through
rte_eth_dev_info_get when it supports runtime queue configuration;
based on these flags, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or just return failure when the device is
already started.
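For illustration only (this sketch is not part of the posted patches): the
application flow this series is meant to enable, using the field and flag
names of this revision (runtime_queue_setup_capa, DEV_RUNTIME_RX_QUEUE_SETUP);
the helper name is made up and error handling is minimal.
#include <errno.h>
#include <rte_ethdev.h>
/* Add one more Rx queue while the port keeps forwarding on the queues
 * that were set up before rte_eth_dev_start().
 */
static int
add_rx_queue_at_runtime(uint16_t port_id, uint16_t queue_id,
			uint16_t nb_desc, unsigned int socket_id,
			struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	rte_eth_dev_info_get(port_id, &dev_info);
	if (!(dev_info.runtime_queue_setup_capa & DEV_RUNTIME_RX_QUEUE_SETUP))
		return -ENOTSUP; /* PMD cannot set up queues after start */
	/* NULL rx_conf selects the PMD default Rx configuration */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      socket_id, NULL, mb_pool);
}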
Qi Zhang (3):
ether: support runtime queue setup
app/testpmd: add command for queue setup
net/i40e: enable runtime queue setup
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++
doc/guides/nics/features.rst | 8 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 ++++++++++++++++++++++++----
lib/librte_ether/rte_ethdev.c | 30 +++--
lib/librte_ether/rte_ethdev.h | 7 ++
7 files changed, 333 insertions(+), 35 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-08 2:42 ` Qi Zhang
@ 2018-04-08 2:42 ` Qi Zhang
2018-04-10 13:59 ` Thomas Monjalon
2018-04-20 11:16 ` Ferruh Yigit
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for " Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 2 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-08 2:42 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
The patch let etherdev driver expose the capability flag through
rte_eth_dev_info_get when it support runtime queue configuraiton,
then base on the flag rte_eth_[rx|tx]_queue_setup could decide
continue to setup the queue or just return fail when device already
started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v6:
- fix tx queue state check in rte_eth_tx_queue_setup
doc/guides/nics/features.rst | 8 ++++++++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..6983faa4e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,15 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_queue_runtime_setup_capabilities:
+Queue runtime setup capabilities
+---------------------------------
+
+Supports queue setup / release after device started.
+
+* **[provides] rte_eth_dev_info**: ``runtime_queue_setup_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 2c74f7e04..8638a2b82 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1441,12 +1441,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1490,6 +1484,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1589,12 +1592,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1612,6 +1609,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.runtime_queue_setup_capa &
+ DEV_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->tx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 5e13dca6a..6b6208a4b 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Runtime setup of rx queue */
+#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Runtime setup of tx queue */
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
/** Configured number of rx/tx queues */
uint16_t nb_rx_queues; /**< Number of RX queues. */
uint16_t nb_tx_queues; /**< Number of TX queues. */
+ uint64_t runtime_queue_setup_capa;
+ /**< queues can be setup after dev_start (DEV_RUNTIME_). */
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for queue setup
2018-04-08 2:42 ` Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
@ 2018-04-08 2:42 ` Qi Zhang
2018-04-20 11:29 ` Ferruh Yigit
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime " Qi Zhang
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-08 2:42 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add new command to setup queue:
queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
rte_eth_[rx|tx]_queue_setup will be called correspondingly
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v6:
- fix error message for rx_free_thresh check.
v5:
- fix command description.
v4:
- fix missing offload in command line.
v3:
- add offload parameter to queue setup command.
- a couple of code refactors.
app/test-pmd/cmdline.c | 129 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 136 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 40b31ad7e..0752492ea 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -774,6 +774,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port tm hierarchy commit (port_id) (clean_on_fail)\n"
" Commit tm hierarchy.\n\n"
+ "queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)\n"
+ " setup a rx or tx queue.\n\n"
+
, list_pkt_forwarding_modes()
);
}
@@ -16030,6 +16033,131 @@ cmdline_parse_inst_t cmd_load_from_file = {
},
};
+/* Queue Setup */
+
+/* Common result structure for queue setup */
+struct cmd_queue_setup_result {
+ cmdline_fixed_string_t queue;
+ cmdline_fixed_string_t setup;
+ cmdline_fixed_string_t rxtx;
+ portid_t port_id;
+ uint16_t queue_idx;
+ uint16_t ring_size;
+ uint64_t offloads;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_queue_setup_queue =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, queue, "queue");
+cmdline_parse_token_string_t cmd_queue_setup_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, setup, "setup");
+cmdline_parse_token_string_t cmd_queue_setup_rxtx =
+ TOKEN_STRING_INITIALIZER(struct cmd_queue_setup_result, rxtx, "rx#tx");
+cmdline_parse_token_num_t cmd_queue_setup_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, port_id, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_queue_idx =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, queue_idx, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_ring_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, ring_size, UINT16);
+cmdline_parse_token_num_t cmd_queue_setup_offloads =
+ TOKEN_NUM_INITIALIZER(struct cmd_queue_setup_result, offloads, UINT64);
+
+static void
+cmd_queue_setup_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_queue_setup_result *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t rx = 1;
+ int ret;
+
+ if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+ return;
+
+ if (!strcmp(res->rxtx, "tx"))
+ rx = 0;
+
+ if (rx && res->ring_size <= rx_free_thresh) {
+ printf("Invalid ring_size, must > rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (rx && res->queue_idx >= nb_rxq) {
+ printf("Invalid rx queue index, must < nb_rxq: %d\n",
+ nb_rxq);
+ return;
+ }
+
+ if (!rx && res->queue_idx >= nb_txq) {
+ printf("Invalid tx queue index, must < nb_txq: %d\n",
+ nb_txq);
+ return;
+ }
+
+ port = &ports[res->port_id];
+ if (rx) {
+ struct rte_eth_rxconf rxconf = port->rx_conf;
+
+ rxconf.offloads = res->offloads;
+ socket_id = rxring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->port_id]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &rxconf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ struct rte_eth_txconf txconf = port->tx_conf;
+
+ txconf.offloads = res->offloads;
+ socket_id = txring_numa[res->port_id];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->port_id,
+ res->queue_idx,
+ res->ring_size,
+ socket_id,
+ &txconf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_queue_setup = {
+ .f = cmd_queue_setup_parsed,
+ .data = NULL,
+ .help_str = "queue setup <rx|tx> <port_id> <queue_idx> <ring_size> <offloads>",
+ .tokens = {
+ (void *)&cmd_queue_setup_queue,
+ (void *)&cmd_queue_setup_setup,
+ (void *)&cmd_queue_setup_rxtx,
+ (void *)&cmd_queue_setup_port_id,
+ (void *)&cmd_queue_setup_queue_idx,
+ (void *)&cmd_queue_setup_ring_size,
+ (void *)&cmd_queue_setup_offloads,
+ NULL,
+ },
+};
+
/* ******************************************************************************** */
/* list of instructions */
@@ -16272,6 +16400,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_del_port_tm_node,
(cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
(cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
+ (cmdline_parse_inst_t *)&cmd_queue_setup,
NULL,
};
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index a766ac795..319b9a2d8 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1444,6 +1444,13 @@ Reset ptype mapping table::
testpmd> ptype mapping reset (port_id)
+queue setup
+~~~~~~~~~~~
+
+Setup a rx or tx queue::
+
+ testpmd> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
+
Port Functions
--------------
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime queue setup
2018-04-08 2:42 ` Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-04-08 2:42 ` Qi Zhang
2018-04-20 11:17 ` Ferruh Yigit
2 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-08 2:42 UTC (permalink / raw)
To: thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device is already started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v5:
- fix first tx queue check.
v4:
- fix rx/tx conflict check.
- no need conflict check for first rx/tx queue at runtime setup.
v3:
- no queue start/stop in setup/release
- return fail when the required rx/tx function conflicts with
the existing setup
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 +++++++++++++++++++++++++++++++++++------
2 files changed, 164 insertions(+), 23 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index d0bf4e349..9e18ffd4e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3240,6 +3240,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->runtime_queue_setup_capa =
+ DEV_RUNTIME_RX_QUEUE_SETUP |
+ DEV_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1217e5a61..0115ae731 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1692,6 +1692,75 @@ i40e_dev_supported_ptypes_get(struct rte_eth_dev *dev)
return NULL;
}
+static int
+i40e_dev_first_queue(uint16_t idx, void **queues, int num)
+{
+ uint16_t i;
+
+ for (i = 0; i < num; i++) {
+ if (i != idx && queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_rx_queue *rxq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ int use_scattered_rx =
+ ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size);
+
+ if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(rxq->queue_id,
+ dev->data->rx_queues,
+ dev->data->nb_rx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_rx_function.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->rx_vec_allowed = true;
+ dev->data->scattered_rx = use_scattered_rx;
+ if (use_def_burst_func)
+ ad->rx_bulk_alloc_allowed = false;
+ i40e_set_rx_function(dev);
+ return 0;
+ }
+
+ /* check bulk alloc conflict */
+ if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
+ PMD_DRV_LOG(ERR, "Can't use default burst.");
+ return -EINVAL;
+ }
+ /* check scattered conflict */
+ if (!dev->data->scattered_rx && use_scattered_rx) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required.");
+ return -EINVAL;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1808,25 +1877,6 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_rx_queue(rxq);
rxq->q_set = TRUE;
- dev->data->rx_queues[queue_idx] = rxq;
-
- use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq);
-
- if (!use_def_burst_func) {
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "satisfied. Rx Burst Bulk Alloc function will be "
- "used on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
- } else {
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "not satisfied, Scattered Rx is requested, "
- "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
- "not enabled on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
- ad->rx_bulk_alloc_allowed = false;
- }
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -1841,6 +1891,34 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_rx_queue_setup_runtime(dev, rxq)) {
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ } else {
+ use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+ }
+
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
@@ -1972,6 +2050,55 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
return RTE_ETH_TX_DESC_FULL;
}
+static int
+i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_tx_queue *txq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (i40e_tx_queue_init(txq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(txq->queue_id,
+ dev->data->tx_queues,
+ dev->data->nb_tx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_tx_function.
+ */
+ ad->tx_simple_allowed = true;
+ ad->tx_vec_allowed = true;
+ i40e_set_tx_function_flag(dev, txq);
+ i40e_set_tx_function(dev);
+ return 0;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup.");
+ return -EINVAL;
+ }
+ }
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ (txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST)) {
+ PMD_DRV_LOG(ERR, "No-simple tx is required.");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
int
i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -2144,10 +2271,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_tx_queue(txq);
txq->q_set = TRUE;
- dev->data->tx_queues[queue_idx] = txq;
-
- /* Use a simple TX queue without offloads or multi segs if possible */
- i40e_set_tx_function_flag(dev, txq);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -2162,6 +2285,20 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_tx_queue_setup_runtime(dev, txq)) {
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ } else {
+ /**
+ * Use a simple TX queue without offloads or
+ * multi segs if possible
+ */
+ i40e_set_tx_function_flag(dev, txq);
+ }
+ dev->data->tx_queues[queue_idx] = txq;
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
@ 2018-04-10 13:59 ` Thomas Monjalon
2018-04-20 11:14 ` Ferruh Yigit
2018-04-24 19:36 ` Thomas Monjalon
2018-04-20 11:16 ` Ferruh Yigit
1 sibling, 2 replies; 95+ messages in thread
From: Thomas Monjalon @ 2018-04-10 13:59 UTC (permalink / raw)
To: Qi Zhang; +Cc: dev, konstantin.ananyev, beilei.xing, jingjing.wu, wenzhuo.lu
Hi,
Please replace ether and etherdev by ethdev (in title and text).
08/04/2018 04:42, Qi Zhang:
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support runtime queue configuraiton,
typo: configuration
> then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> continue to setup the queue or just return fail when device already
> started.
Generally speaking, it is easier to read when broken into several sentences,
and starting with the problem statement.
Example:
"
It is not possible to setup a queue when the port is started
because of a check in ethdev layer.
New capability flags are added in order to relax this check
for devices which support queue setup in runtime.
The functions rte_eth_[rx|tx]_queue_setup will raise an error only
if the port is started and runtime setup of queue is not supported.
"
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -981,6 +981,11 @@ struct rte_eth_conf {
> */
> #define DEV_TX_OFFLOAD_SECURITY 0x00020000
>
> +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
> +/**< Deferred setup rx queue */
> +#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
> +/**< Deferred setup tx queue */
Please use RTE_ETH_ prefix.
> /*
> * If new Tx offload capabilities are defined, they also must be
> * mentioned in rte_tx_offload_names in rte_ethdev.c file.
> @@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
> /** Configured number of rx/tx queues */
> uint16_t nb_rx_queues; /**< Number of RX queues. */
> uint16_t nb_tx_queues; /**< Number of TX queues. */
> + uint64_t runtime_queue_setup_capa;
> + /**< queues can be setup after dev_start (DEV_DEFERRED_). */
Why using uint64_t for that?
Maybe these flags can find another place, less specific.
What about a field for all setup capabilities? setup_capa?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-10 13:59 ` Thomas Monjalon
@ 2018-04-20 11:14 ` Ferruh Yigit
2018-04-24 19:36 ` Thomas Monjalon
1 sibling, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-20 11:14 UTC (permalink / raw)
To: Thomas Monjalon, Qi Zhang
Cc: dev, konstantin.ananyev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/10/2018 2:59 PM, Thomas Monjalon wrote:
> Hi,
>
> Please replace ether and etherdev by ethdev (in title and text).
>
> 08/04/2018 04:42, Qi Zhang:
>> The patch let etherdev driver expose the capability flag through
>> rte_eth_dev_info_get when it support runtime queue configuraiton,
>
> typo: configuration
>
>> then base on the flag rte_eth_[rx|tx]_queue_setup could decide
>> continue to setup the queue or just return fail when device already
>> started.
>
> Generally speaking, it is easier to read when broke in several sentences,
> and starting with the problem statement.
> Example:
> "
> It is not possible to setup a queue when the port is started
> because of a check in ethdev layer.
> New capability flags are added in order to relax this check
> for devices which support queue setup in runtime.
> The functions rte_eth_[rx|tx]_queue_setup will raise an error only
> if the port is started and runtime setup of queue is not supported.
> "
>>
>> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>> --- a/lib/librte_ether/rte_ethdev.h
>> +++ b/lib/librte_ether/rte_ethdev.h
>> @@ -981,6 +981,11 @@ struct rte_eth_conf {
>> */
>> #define DEV_TX_OFFLOAD_SECURITY 0x00020000
>>
>> +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
>> +/**< Deferred setup rx queue */
>> +#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
>> +/**< Deferred setup tx queue */
>
> Please use RTE_ETH_ prefix.
>
>> /*
>> * If new Tx offload capabilities are defined, they also must be
>> * mentioned in rte_tx_offload_names in rte_ethdev.c file.
>> @@ -1029,6 +1034,8 @@ struct rte_eth_dev_info {
>> /** Configured number of rx/tx queues */
>> uint16_t nb_rx_queues; /**< Number of RX queues. */
>> uint16_t nb_tx_queues; /**< Number of TX queues. */
>> + uint64_t runtime_queue_setup_capa;
>> + /**< queues can be setup after dev_start (DEV_DEFERRED_). */
>
> Why using uint64_t for that?
> Maybe these flags can find another place, less specific.
> What about a field for all setup capabilities? setup_capa?
I was about to make a similar comment: why not start a more generic capabilities
variable [1]?
And make the flag values more generic, like "DEV_CAPA_RUNTIME_RX_QUEUE_SETUP" etc.
[1]
It has been mentioned a few times previously that there is no way for an
application to get device capabilities dynamically; that is true. With the
offload capability flags this has been solved for offloading, but it is still
a generic issue.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
2018-04-10 13:59 ` Thomas Monjalon
@ 2018-04-20 11:16 ` Ferruh Yigit
1 sibling, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-20 11:16 UTC (permalink / raw)
To: Qi Zhang, thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/8/2018 3:42 AM, Qi Zhang wrote:
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support runtime queue configuraiton,
> then base on the flag rte_eth_[rx|tx]_queue_setup could decide
> continue to setup the queue or just return fail when device already
> started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> v6:
> - fix tx queue state check in rte_eth_tx_queue_setup
>
> doc/guides/nics/features.rst | 8 ++++++++
> lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> lib/librte_ether/rte_ethdev.h | 7 +++++++
> 3 files changed, 33 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 1b4fb979f..6983faa4e 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -892,7 +892,15 @@ Documentation describes performance values.
>
> See ``dpdk.org/doc/perf/*``.
>
> +.. _nic_features_queue_runtime_setup_capabilities:
>
> +Queue runtime setup capabilities
> +---------------------------------
> +
> +Supports queue setup / release after device started.
> +
> +* **[provides] rte_eth_dev_info**: ``runtime_queue_config_capa:DEV_RUNTIME_RX_QUEUE_SETUP,DEV_RUNTIME_TX_QUEUE_SETUP``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
A new feature is added; can you please add it into the default.ini file, and is
it possible to shorten the feature name?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime queue setup
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime " Qi Zhang
@ 2018-04-20 11:17 ` Ferruh Yigit
0 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-20 11:17 UTC (permalink / raw)
To: Qi Zhang, thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/8/2018 3:42 AM, Qi Zhang wrote:
> Expose the runtime queue configuration capability and enhance
> i40e_dev_[rx|tx]_queue_setup to handle the situation when
> device already started.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> v5:
> - fix first tx queue check.
>
> v4:
> - fix rx/tx conflict check.
> - no need conflict check for first rx/tx queue at runtime setup.
>
> v3:
> - no queue start/stop in setup/release
> - return fail when required rx/tx function conflict with
> exist setup
>
> drivers/net/i40e/i40e_ethdev.c | 4 +
> drivers/net/i40e/i40e_rxtx.c | 183 +++++++++++++++++++++++++++++++++++------
Can you please update the *i40e*.ini file to announce the newly introduced feature?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for queue setup
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for " Qi Zhang
@ 2018-04-20 11:29 ` Ferruh Yigit
2018-04-22 11:57 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-20 11:29 UTC (permalink / raw)
To: Qi Zhang, thomas, konstantin.ananyev
Cc: dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/8/2018 3:42 AM, Qi Zhang wrote:
> Add new command to setup queue:
> queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
My almost classic comment for testpmd:
Do we need a new high-level command "queue setup" for this? Can't we extend
the existing port/queue setting commands for the sake of usability? Each feature
is trying to add its own command in a new syntax that suits its author.
And why are both (ring_size) and (offloads) set in the same command? As far as
I can see you can't omit them, so to set the ring_size I also need to know the
queue's offloads; they don't look related enough to be set together in one command.
Isn't there any other command to set ring_size? I would be surprised if there is
not. And there are a few new commands for setting/getting offloads. Can't we
re-use them?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for queue setup
2018-04-20 11:29 ` Ferruh Yigit
@ 2018-04-22 11:57 ` Zhang, Qi Z
0 siblings, 0 replies; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-22 11:57 UTC (permalink / raw)
To: Yigit, Ferruh, thomas, Ananyev, Konstantin
Cc: dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
Hi Ferruh:
> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, April 20, 2018 7:30 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for
> queue setup
>
> On 4/8/2018 3:42 AM, Qi Zhang wrote:
> > Add new command to setup queue:
> > queue setup (rx|tx) (port_id) (queue_idx) (ring_size) (offloads)
>
> My almost classic comment for testpmd:
> Do we need a new high level command "queue setup" for this. Can't we
> extend existing port/queue setting commands for the sake of the usability.
> Each feature is trying to add its new command in its new syntax what suits to
> author.
>
> And why both (ring_size) and (offloads) set in same command, as far as I can
> see you can't ignore them, so to set ring_size I should know offloads for
> queue too, they don't look too related why setting together in same
> command.
>
> Isn't there any other command to set ring_size? I would be surprised if there
> is no. And there are a few new command for setting/getting offloads. Can't
> we re-use them?
Thanks for all the comments; please check my v7, which follows your suggestions.
Regards
Qi
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 0/5] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (8 preceding siblings ...)
2018-04-08 2:42 ` Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 1/5] ethdev: support " Qi Zhang
` (5 more replies)
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
10 siblings, 6 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
From: Qi Zhang <Qi.z.zhang@intel.com>
v7:
- update default.ini and i40e.ini.
- rename runtime_queue_setup_capa to the more generic dev_capa.
- testpmd queue setup command is moved to the "ports" command group.
- remove ring_size and offload from the queue setup command in testpmd.
- enable per-queue config in testpmd.
- enable queue ring size configuration command in testpmd.
- fix a couple of typos.
TODO:
The queue offload config command is not implemented yet, but the per-queue
configuration data structure is already supported in PATCH 3.
v6:
- fix tx queue state check in rte_eth_tx_queue_setup
- fix error message in testpmd.
v5:
- fix first tx queue check in i40e.
v4:
- fix i40e rx/tx function conflict handling.
- no conflict check needed for the first rx/tx queue at runtime setup.
- fix missing offload parameter in testpmd cmdline.
v3:
- do not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to testpmd queue setup command line
- i40e fix: return fail when the required rx/tx function conflicts with
the existing setup.
v2:
- enhance comment in rte_ethdev.h
According to the existing implementation, rte_eth_[rx|tx]_queue_setup will
always fail if the device is already started (rte_eth_dev_start).
This does not satisfy the use case where an application wants to set up
part of the queues later, while traffic keeps running on the queues that
are already set up.
example:
rte_eth_dev_config(nb_rxq = 2, nb_txq =2)
rte_eth_rx_queue_setup(idx = 0 ...)
rte_eth_tx_queue_setup(idx = 0 ...)
rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
Basically this is not a general hardware limitation, because for NICs
like i40e and ixgbe it is not necessary to stop the whole device before
configuring a fresh queue or reconfiguring an existing queue that has no
traffic on it.
The patch lets the ethdev driver expose capability flags through
rte_eth_dev_info_get when it supports runtime queue configuration;
based on these flags, rte_eth_[rx|tx]_queue_setup can decide whether to
continue setting up the queue or just return failure when the device is
already started.
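For illustration only (not part of the posted patches): the same application-
side check as in the sketch after the earlier cover letter, updated to the v7
naming where the flags moved into the generic dev_capa field; the Tx side is
shown here and the helper name is made up.
#include <errno.h>
#include <rte_ethdev.h>
static int
add_tx_queue_at_runtime(uint16_t port_id, uint16_t queue_id,
			uint16_t nb_desc, unsigned int socket_id)
{
	struct rte_eth_dev_info dev_info;
	rte_eth_dev_info_get(port_id, &dev_info);
	if (!(dev_info.dev_capa & DEV_CAPA_RUNTIME_TX_QUEUE_SETUP))
		return -ENOTSUP; /* PMD cannot set up queues after start */
	/* NULL tx_conf selects the PMD default Tx configuration */
	return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
				      socket_id, NULL);
}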
Qi Zhang (1):
net/i40e: enable runtime queue setup
qi Zhang (4):
ethdev: support runtime queue setup
app/testpmd: add command for queue setup
app/testpmd: enable per queue configure
app/testpmd: enable queue ring size configure
app/test-pmd/cmdline.c | 217 ++++++++++++++++++++++++++++
app/test-pmd/config.c | 48 +++---
app/test-pmd/testpmd.c | 101 ++++++++-----
app/test-pmd/testpmd.h | 6 +-
doc/guides/nics/features.rst | 18 +++
doc/guides/nics/features/default.ini | 2 +
doc/guides/nics/features/i40e.ini | 2 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 16 ++
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 ++++++++++++++++++++---
lib/librte_ether/rte_ethdev.c | 30 ++--
lib/librte_ether/rte_ethdev.h | 7 +
12 files changed, 540 insertions(+), 94 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 1/5] ethdev: support runtime queue setup
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 2/5] app/testpmd: add command for " Qi Zhang
` (4 subsequent siblings)
5 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
It's not possible to setup a queue when the port is started
because of a check in ethdev layer. New capability flags are
added in order to relax this check for devices which support
queue setup in runtime. The functions rte_eth_[rx|tx]_queue_setup
will raise an error only if the port is started and runtime setup
of queue is not supported.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v7:
- update default.ini
- rename runtime_queue_setup_capa to the more generic dev_capa.
- fix typo.
v6:
- fix tx queue state check in rte_eth_tx_queue_setup
doc/guides/nics/features.rst | 18 ++++++++++++++++++
doc/guides/nics/features/default.ini | 2 ++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
4 files changed, 45 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..67d459f80 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,25 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_runtime_rx_queue_setup:
+Runtime Rx queue setup
+----------------------
+
+Supports Rx queue setup after device started.
+
+* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_RX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
+
+.. _nic_features_runtime_tx_queue_setup:
+
+Runtime Tx queue setup
+----------------------
+
+Supports Tx queue setup after device started.
+
+* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index dae2ad776..dae80d52f 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -78,3 +78,5 @@ x86-64 =
Usage doc =
Design doc =
Perf doc =
+Runtime Rx queue setup =
+Runtime Tx queue setup =
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 54d6bf355..dd8e38dfa 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1413,12 +1413,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1470,6 +1464,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.dev_capa &
+ DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1545,12 +1548,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1575,6 +1572,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.dev_capa &
+ DEV_CAPA_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->tx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 7e4e57b3c..c775a64a1 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -981,6 +981,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_SECURITY 0x00020000
+#define DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Device supports Rx queue setup after device started*/
+#define DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Device supports Tx queue setup after device started*/
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1054,6 +1059,8 @@ struct rte_eth_dev_info {
struct rte_eth_dev_portconf default_rxportconf;
/** Tx parameter recommendations */
struct rte_eth_dev_portconf default_txportconf;
+ /** Generic device capabilities */
+ uint64_t dev_capa;
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 2/5] app/testpmd: add command for queue setup
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 1/5] ethdev: support " Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure Qi Zhang
` (3 subsequent siblings)
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue; rte_eth_[rx|tx]_queue_setup will
be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v7:
- remove ring_size and offload parameters and move to the "ports" command
group.
v6:
- fix error message for rx_free_thresh check.
v5:
- fix command description.
v4:
- fix missing offload in command line.
v3:
- add offload parameter to queue setup command.
- a couple of code refactors.
app/test-pmd/cmdline.c | 115 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 122 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d9b1435a2..8ce7eb1f5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -850,6 +850,9 @@ static void cmd_help_long_parsed(void *parsed_result,
" Start/stop a rx/tx queue of port X. Only take effect"
" when port X is started\n\n"
+ "port (port_id) (rxq|txq) (queue_id) setup\n"
+ " Setup a rx/tx queue of port X.\n\n"
+
"port config (port_id|all) l2-tunnel E-tag ether-type"
" (value)\n"
" Set the value of E-tag ether-type.\n\n"
@@ -2282,6 +2285,117 @@ cmdline_parse_inst_t cmd_config_rxtx_queue = {
},
};
+/* *** configure port rxq/txq setup *** */
+struct cmd_setup_rxtx_queue {
+ cmdline_fixed_string_t port;
+ portid_t portid;
+ cmdline_fixed_string_t rxtxq;
+ uint16_t qid;
+ cmdline_fixed_string_t setup;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, port, "port");
+cmdline_parse_token_num_t cmd_setup_rxtx_queue_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, portid, UINT16);
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_rxtxq =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, rxtxq, "rxq#txq");
+cmdline_parse_token_num_t cmd_setup_rxtx_queue_qid =
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, qid, UINT16);
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, setup, "setup");
+
+static void
+cmd_setup_rxtx_queue_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_setup_rxtx_queue *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t isrx = 0;
+ int ret;
+
+ if (port_id_is_invalid(res->portid, ENABLED_WARN))
+ return;
+
+ if (res->portid == (portid_t)RTE_PORT_ALL) {
+ printf("Invalid port id\n");
+ return;
+ }
+
+ if (!strcmp(res->rxtxq, "rxq"))
+ isrx = 1;
+ else if (!strcmp(res->rxtxq, "txq"))
+ isrx = 0;
+ else {
+ printf("Unknown parameter\n");
+ return;
+ }
+
+ if (isrx && rx_queue_id_is_invalid(res->qid)) {
+ printf("Invalid rx queue\n");
+ return;
+ } else if (!isrx && tx_queue_id_is_invalid(res->qid)) {
+ printf("Invalid tx queue\n");
+ return;
+ }
+
+ port = &ports[res->portid];
+ if (isrx) {
+ socket_id = rxring_numa[res->portid];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->portid]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->portid,
+ res->qid,
+ nb_rxd,
+ socket_id,
+ &port->rx_conf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ socket_id = txring_numa[res->portid];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->portid,
+ res->qid,
+ nb_txd,
+ socket_id,
+ &port->tx_conf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_setup_rxtx_queue = {
+ .f = cmd_setup_rxtx_queue_parsed,
+ .data = NULL,
+ .help_str = "port <port_id> rxq|txq <queue_idx> setup",
+ .tokens = {
+ (void *)&cmd_setup_rxtx_queue_port,
+ (void *)&cmd_setup_rxtx_queue_portid,
+ (void *)&cmd_setup_rxtx_queue_rxtxq,
+ (void *)&cmd_setup_rxtx_queue_qid,
+ (void *)&cmd_setup_rxtx_queue_setup,
+ NULL,
+ },
+};
+
+
/* *** Configure RSS RETA *** */
struct cmd_config_rss_reta {
cmdline_fixed_string_t port;
@@ -16233,6 +16347,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
+ (cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_config_rss_reta,
(cmdline_parse_inst_t *)&cmd_showport_reta,
(cmdline_parse_inst_t *)&cmd_config_burst,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index cb6f201e1..07a43aeeb 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1630,6 +1630,13 @@ Start/stop a rx/tx queue on a specific port::
testpmd> port (port_id) (rxq|txq) (queue_id) (start|stop)
+port setup queue
+~~~~~~~~~~~~~~~~~~~~~
+
+Setup a rx/tx queue on a specific port::
+
+ testpmd> port (port_id) (rxq|txq) (queue_id) setup
+
Only take effect when port is started.
port config - speed
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 1/5] ethdev: support " Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 2/5] app/testpmd: add command for " Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure Qi Zhang
` (2 subsequent siblings)
5 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Each queue has independent configuration information in rte_port.
Based on this, we are able to add new commands to configure
different queues with different values.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
app/test-pmd/cmdline.c | 8 ++--
app/test-pmd/config.c | 48 ++++++++++++++---------
app/test-pmd/testpmd.c | 101 ++++++++++++++++++++++++++++++-------------------
app/test-pmd/testpmd.h | 6 ++-
4 files changed, 100 insertions(+), 63 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8ce7eb1f5..b50e11e60 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2360,9 +2360,9 @@ cmd_setup_rxtx_queue_parsed(
}
ret = rte_eth_rx_queue_setup(res->portid,
res->qid,
- nb_rxd,
+ port->nb_rx_desc[res->qid],
socket_id,
- &port->rx_conf,
+ &port->rx_conf[res->qid],
mp);
if (ret)
printf("Failed to setup RX queue\n");
@@ -2373,9 +2373,9 @@ cmd_setup_rxtx_queue_parsed(
ret = rte_eth_tx_queue_setup(res->portid,
res->qid,
- nb_txd,
+ port->nb_tx_desc[res->qid],
socket_id,
- &port->tx_conf);
+ &port->tx_conf[res->qid]);
if (ret)
printf("Failed to setup TX queue\n");
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 5daa93bb3..de5c048f6 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1738,6 +1738,7 @@ void
rxtx_config_display(void)
{
portid_t pid;
+ queueid_t qid;
printf(" %s packet forwarding%s packets/burst=%d\n",
cur_fwd_eng->fwd_mode_name,
@@ -1752,30 +1753,41 @@ rxtx_config_display(void)
nb_fwd_lcores, nb_fwd_ports);
RTE_ETH_FOREACH_DEV(pid) {
- struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf;
- struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf;
+ struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf[0];
+ struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf[0];
printf(" port %d:\n", (unsigned int)pid);
printf(" CRC stripping %s\n",
(ports[pid].dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_CRC_STRIP) ?
"enabled" : "disabled");
- printf(" RX queues=%d - RX desc=%d - RX free threshold=%d\n",
- nb_rxq, nb_rxd, rx_conf->rx_free_thresh);
- printf(" RX threshold registers: pthresh=%d hthresh=%d "
- " wthresh=%d\n",
- rx_conf->rx_thresh.pthresh,
- rx_conf->rx_thresh.hthresh,
- rx_conf->rx_thresh.wthresh);
- printf(" TX queues=%d - TX desc=%d - TX free threshold=%d\n",
- nb_txq, nb_txd, tx_conf->tx_free_thresh);
- printf(" TX threshold registers: pthresh=%d hthresh=%d "
- " wthresh=%d\n",
- tx_conf->tx_thresh.pthresh,
- tx_conf->tx_thresh.hthresh,
- tx_conf->tx_thresh.wthresh);
- printf(" TX RS bit threshold=%d - TXQ offloads=0x%"PRIx64"\n",
- tx_conf->tx_rs_thresh, tx_conf->offloads);
+ printf(" RX queues = %d\n", nb_rxq);
+ for (qid = 0; qid < nb_rxq; qid++) {
+ printf(" Queue Index = %d\n", qid);
+ printf(" RX desc=%d - RX free threshold=%d\n",
+ ports[pid].nb_rx_desc[qid],
+ rx_conf[qid].rx_free_thresh);
+ printf(" RX threshold registers: pthresh=%d hthresh=%d "
+ " wthresh=%d\n",
+ rx_conf[qid].rx_thresh.pthresh,
+ rx_conf[qid].rx_thresh.hthresh,
+ rx_conf[qid].rx_thresh.wthresh);
+ }
+ printf(" TX queues = %d\n", nb_txq);
+ for (qid = 0; qid < nb_txq; qid++) {
+ printf(" Queue Index = %d\n", qid);
+ printf(" TX desc=%d - TX free threshold=%d\n",
+ ports[pid].nb_tx_desc[qid],
+ tx_conf[qid].tx_free_thresh);
+ printf(" TX threshold registers: pthresh=%d hthresh=%d "
+ " wthresh=%d\n",
+ tx_conf[qid].tx_thresh.pthresh,
+ tx_conf[qid].tx_thresh.hthresh,
+ tx_conf[qid].tx_thresh.wthresh);
+ printf(" TX RS bit threshold=%d - TXQ offloads=0x%"PRIx64"\n",
+ tx_conf[qid].tx_rs_thresh,
+ tx_conf[qid].offloads);
+ }
}
}
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d6da41927..f9b637ba8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1593,20 +1593,24 @@ start_port(portid_t pid)
}
if (port->need_reconfig_queues > 0) {
port->need_reconfig_queues = 0;
- port->tx_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
- /* Apply Tx offloads configuration */
- port->tx_conf.offloads = port->dev_conf.txmode.offloads;
/* setup tx queues */
for (qi = 0; qi < nb_txq; qi++) {
+ port->tx_conf[qi].txq_flags =
+ ETH_TXQ_FLAGS_IGNORE;
+ /* Apply Tx offloads configuration */
+ port->tx_conf[qi].offloads =
+ port->dev_conf.txmode.offloads;
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,
- nb_txd,txring_numa[pi],
- &(port->tx_conf));
+ port->nb_tx_desc[qi],
+ txring_numa[pi],
+ &(port->tx_conf[qi]));
else
diag = rte_eth_tx_queue_setup(pi, qi,
- nb_txd,port->socket_id,
- &(port->tx_conf));
+ port->nb_tx_desc[qi],
+ port->socket_id,
+ &(port->tx_conf[qi]));
if (diag == 0)
continue;
@@ -1617,15 +1621,17 @@ start_port(portid_t pid)
RTE_PORT_STOPPED) == 0)
printf("Port %d can not be set back "
"to stopped\n", pi);
- printf("Fail to configure port %d tx queues\n", pi);
+ printf("Fail to configure port %d tx queues\n",
+ pi);
/* try to reconfigure queues next time */
port->need_reconfig_queues = 1;
return -1;
}
- /* Apply Rx offloads configuration */
- port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
- /* setup rx queues */
for (qi = 0; qi < nb_rxq; qi++) {
+ /* Apply Rx offloads configuration */
+ port->rx_conf[qi].offloads =
+ port->dev_conf.rxmode.offloads;
+ /* setup rx queues */
if ((numa_support) &&
(rxring_numa[pi] != NUMA_NO_CONFIG)) {
struct rte_mempool * mp =
@@ -1639,8 +1645,10 @@ start_port(portid_t pid)
}
diag = rte_eth_rx_queue_setup(pi, qi,
- nb_rxd,rxring_numa[pi],
- &(port->rx_conf),mp);
+ port->nb_rx_desc[qi],
+ rxring_numa[pi],
+ &(port->rx_conf[qi]),
+ mp);
} else {
struct rte_mempool *mp =
mbuf_pool_find(port->socket_id);
@@ -1652,8 +1660,10 @@ start_port(portid_t pid)
return -1;
}
diag = rte_eth_rx_queue_setup(pi, qi,
- nb_rxd,port->socket_id,
- &(port->rx_conf), mp);
+ port->nb_rx_desc[qi],
+ port->socket_id,
+ &(port->rx_conf[qi]),
+ mp);
}
if (diag == 0)
continue;
@@ -1664,7 +1674,8 @@ start_port(portid_t pid)
RTE_PORT_STOPPED) == 0)
printf("Port %d can not be set back "
"to stopped\n", pi);
- printf("Fail to configure port %d rx queues\n", pi);
+ printf("Fail to configure port %d rx queues\n",
+ pi);
/* try to reconfigure queues next time */
port->need_reconfig_queues = 1;
return -1;
@@ -2225,39 +2236,51 @@ map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
static void
rxtx_port_config(struct rte_port *port)
{
- port->rx_conf = port->dev_info.default_rxconf;
- port->tx_conf = port->dev_info.default_txconf;
+ uint16_t qid;
- /* Check if any RX/TX parameters have been passed */
- if (rx_pthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.pthresh = rx_pthresh;
+ for (qid = 0; qid < nb_rxq; qid++) {
+ port->rx_conf[qid] = port->dev_info.default_rxconf;
- if (rx_hthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.hthresh = rx_hthresh;
+ /* Check if any Rx parameters have been passed */
+ if (rx_pthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.pthresh = rx_pthresh;
- if (rx_wthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.wthresh = rx_wthresh;
+ if (rx_hthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.hthresh = rx_hthresh;
- if (rx_free_thresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_free_thresh = rx_free_thresh;
+ if (rx_wthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.wthresh = rx_wthresh;
- if (rx_drop_en != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_drop_en = rx_drop_en;
+ if (rx_free_thresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_free_thresh = rx_free_thresh;
- if (tx_pthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.pthresh = tx_pthresh;
+ if (rx_drop_en != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_drop_en = rx_drop_en;
- if (tx_hthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.hthresh = tx_hthresh;
+ port->nb_rx_desc[qid] = nb_rxd;
+ }
+
+ for (qid = 0; qid < nb_txq; qid++) {
+ port->tx_conf[qid] = port->dev_info.default_txconf;
+
+ /* Check if any Tx parameters have been passed */
+ if (tx_pthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.pthresh = tx_pthresh;
- if (tx_wthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.wthresh = tx_wthresh;
+ if (tx_hthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.hthresh = tx_hthresh;
- if (tx_rs_thresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_rs_thresh = tx_rs_thresh;
+ if (tx_wthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.wthresh = tx_wthresh;
- if (tx_free_thresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_free_thresh = tx_free_thresh;
+ if (tx_rs_thresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_rs_thresh = tx_rs_thresh;
+
+ if (tx_free_thresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_free_thresh = tx_free_thresh;
+
+ port->nb_tx_desc[qid] = nb_txd;
+ }
}
void
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 070919822..6f6eada66 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -194,8 +194,10 @@ struct rte_port {
uint8_t need_reconfig_queues; /**< need reconfiguring queues or not */
uint8_t rss_flag; /**< enable rss or not */
uint8_t dcb_flag; /**< enable dcb */
- struct rte_eth_rxconf rx_conf; /**< rx configuration */
- struct rte_eth_txconf tx_conf; /**< tx configuration */
+ uint16_t nb_rx_desc[MAX_QUEUE_ID+1]; /**< per queue rx desc number */
+ uint16_t nb_tx_desc[MAX_QUEUE_ID+1]; /**< per queue tx desc number */
+ struct rte_eth_rxconf rx_conf[MAX_QUEUE_ID+1]; /**< per queue rx configuration */
+ struct rte_eth_txconf tx_conf[MAX_QUEUE_ID+1]; /**< per queue tx configuration */
struct ether_addr *mc_addr_pool; /**< pool of multicast addrs */
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
uint8_t slave_flag; /**< bonding slave port */
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
` (2 preceding siblings ...)
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 5/5] net/i40e: enable runtime queue setup Qi Zhang
2018-04-23 17:45 ` [dpdk-dev] [PATCH v7 0/5] " Ferruh Yigit
5 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a command to change a specific queue's ring size configuration.
The new value will only take effect after a command that restarts
the device (port stop <port_id>/port start <port_id>) or a command
that sets up the queue (port <port_id> rxq <qid> setup) at runtime.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
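
As an illustration (the port and queue ids below are examples only, not part
of the patch), a typical session combining this command with the queue setup
command would be:

    testpmd> port config 0 rxq 1 ring_size 512
    testpmd> port 0 rxq 1 setup

i.e. the new descriptor count is only recorded per queue and is applied the
next time the queue is set up or the port is restarted.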
---
app/test-pmd/cmdline.c | 102 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 +++
2 files changed, 111 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b50e11e60..22e4d4585 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -846,6 +846,11 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config mtu X value\n"
" Set the MTU of port X to a given value\n\n"
+ "port config (port_id) (rxq|txq) (queue_id) ring_size (value)\n"
+ " Set a rx/tx queue's ring size configuration, the new"
+ " value will take effect after command that (re-)start the port"
+ " or command that setup the specific queue\n\n"
+
"port (port_id) (rxq|txq) (queue_id) (start|stop)\n"
" Start/stop a rx/tx queue of port X. Only take effect"
" when port X is started\n\n"
@@ -2191,6 +2196,102 @@ cmdline_parse_inst_t cmd_config_rss_hash_key = {
},
};
+/* *** configure port rxq/txq ring size *** */
+struct cmd_config_rxtx_ring_size {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t config;
+ portid_t portid;
+ cmdline_fixed_string_t rxtxq;
+ uint16_t qid;
+ cmdline_fixed_string_t rsize;
+ uint16_t size;
+};
+
+static void
+cmd_config_rxtx_ring_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_rxtx_ring_size *res = parsed_result;
+ struct rte_port *port;
+ uint8_t isrx;
+
+ if (port_id_is_invalid(res->portid, ENABLED_WARN))
+ return;
+
+ if (res->portid == (portid_t)RTE_PORT_ALL) {
+ printf("Invalid port id\n");
+ return;
+ }
+
+ port = &ports[res->portid];
+
+ if (!strcmp(res->rxtxq, "rxq"))
+ isrx = 1;
+ else if (!strcmp(res->rxtxq, "txq"))
+ isrx = 0;
+ else {
+ printf("Unknown parameter\n");
+ return;
+ }
+
+ if (isrx && rx_queue_id_is_invalid(res->qid))
+ return;
+ else if (!isrx && tx_queue_id_is_invalid(res->qid))
+ return;
+
+ if (isrx && res->size != 0 && res->size <= rx_free_thresh) {
+ printf("Invalid rx ring_size, must > rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (isrx)
+ port->nb_rx_desc[res->qid] = res->size;
+ else
+ port->nb_tx_desc[res->qid] = res->size;
+
+ cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_config =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ config, "config");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ portid, UINT16);
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rxtxq =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ rxtxq, "rxq#txq");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_qid =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ qid, UINT16);
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rsize =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ rsize, "ring_size");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ size, UINT16);
+
+cmdline_parse_inst_t cmd_config_rxtx_ring_size = {
+ .f = cmd_config_rxtx_ring_size_parsed,
+ .data = NULL,
+ .help_str = "port config <port_id> rxq|txq <queue_id> ring_size <value>",
+ .tokens = {
+ (void *)&cmd_config_rxtx_ring_size_port,
+ (void *)&cmd_config_rxtx_ring_size_config,
+ (void *)&cmd_config_rxtx_ring_size_portid,
+ (void *)&cmd_config_rxtx_ring_size_rxtxq,
+ (void *)&cmd_config_rxtx_ring_size_qid,
+ (void *)&cmd_config_rxtx_ring_size_rsize,
+ (void *)&cmd_config_rxtx_ring_size_size,
+ NULL,
+ },
+};
+
/* *** configure port rxq/txq start/stop *** */
struct cmd_config_rxtx_queue {
cmdline_fixed_string_t port;
@@ -16346,6 +16447,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
+ (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
(cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_config_rss_reta,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 07a43aeeb..e0b159bc6 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1623,6 +1623,15 @@ Close all ports or a specific port::
testpmd> port close (port_id|all)
+port config - queue ring size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a rx/tx queue ring size::
+
+ testpmd> port (port_id) (rxq|txq) (queue_id) ring_size (value)
+
+Only take effect after command that (re-)start the port or command that setup specific queue.
+
port start/stop queue
~~~~~~~~~~~~~~~~~~~~~
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v7 5/5] net/i40e: enable runtime queue setup
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
` (3 preceding siblings ...)
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure Qi Zhang
@ 2018-04-22 11:58 ` Qi Zhang
2018-04-23 17:45 ` [dpdk-dev] [PATCH v7 0/5] " Ferruh Yigit
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-22 11:58 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
From: Qi Zhang <Qi.z.zhang@intel.com>
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device is already started.
Signed-off-by: Qi Zhang <Qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
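
For illustration, a minimal application-side sketch of the flow this patch
enables on i40e (descriptor counts, queue ids and the mempool are assumptions,
and most error handling is trimmed):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    static int
    deferred_queue_setup(uint16_t port_id, struct rte_mempool *mp,
                         const struct rte_eth_conf *conf)
    {
            int socket = rte_eth_dev_socket_id(port_id);
            int ret;

            ret = rte_eth_dev_configure(port_id, 2, 2, conf);
            if (ret < 0)
                    return ret;

            /* only queue 0 is set up before start */
            rte_eth_rx_queue_setup(port_id, 0, 1024, socket, NULL, mp);
            rte_eth_tx_queue_setup(port_id, 0, 1024, socket, NULL);

            ret = rte_eth_dev_start(port_id); /* rx/tx burst ready on queue 0 */
            if (ret < 0)
                    return ret;

            /* with the runtime setup capability this no longer fails */
            rte_eth_rx_queue_setup(port_id, 1, 1024, socket, NULL, mp);
            rte_eth_tx_queue_setup(port_id, 1, 1024, socket, NULL);

            return 0;
    }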
---
v7:
- update i40e.ini
v5:
- fix first tx queue check.
v4:
- fix rx/tx conflict check.
- no conflict check needed for the first rx/tx queue at runtime setup.
v3:
- no queue start/stop in setup/release
- return failure when the required rx/tx function conflicts with the
existing setup
doc/guides/nics/features/i40e.ini | 2 +
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 +++++++++++++++++++++++++++++++++-----
3 files changed, 166 insertions(+), 23 deletions(-)
diff --git a/doc/guides/nics/features/i40e.ini b/doc/guides/nics/features/i40e.ini
index e862712c9..5fb5bb296 100644
--- a/doc/guides/nics/features/i40e.ini
+++ b/doc/guides/nics/features/i40e.ini
@@ -52,3 +52,5 @@ x86-32 = Y
x86-64 = Y
ARMv8 = Y
Power8 = Y
+Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 180ac7449..e329042df 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3244,6 +3244,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->dev_capa =
+ DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
+ DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index bc660596b..df855ff3a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1706,6 +1706,75 @@ i40e_check_rx_queue_offloads(struct rte_eth_dev *dev, uint64_t requested)
return !((mandatory ^ requested) & supported);
}
+static int
+i40e_dev_first_queue(uint16_t idx, void **queues, int num)
+{
+ uint16_t i;
+
+ for (i = 0; i < num; i++) {
+ if (i != idx && queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_rx_queue *rxq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ int use_scattered_rx =
+ ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size);
+
+ if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(rxq->queue_id,
+ dev->data->rx_queues,
+ dev->data->nb_rx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_rx_function.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->rx_vec_allowed = true;
+ dev->data->scattered_rx = use_scattered_rx;
+ if (use_def_burst_func)
+ ad->rx_bulk_alloc_allowed = false;
+ i40e_set_rx_function(dev);
+ return 0;
+ }
+
+ /* check bulk alloc conflict */
+ if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
+ PMD_DRV_LOG(ERR, "Can't use default burst.");
+ return -EINVAL;
+ }
+ /* check scatterred conflict */
+ if (!dev->data->scattered_rx && use_scattered_rx) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required.");
+ return -EINVAL;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1834,25 +1903,6 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_rx_queue(rxq);
rxq->q_set = TRUE;
- dev->data->rx_queues[queue_idx] = rxq;
-
- use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq);
-
- if (!use_def_burst_func) {
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "satisfied. Rx Burst Bulk Alloc function will be "
- "used on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
- } else {
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "not satisfied, Scattered Rx is requested, "
- "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
- "not enabled on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
- ad->rx_bulk_alloc_allowed = false;
- }
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -1867,6 +1917,34 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_rx_queue_setup_runtime(dev, rxq)) {
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ } else {
+ use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+ }
+
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
@@ -2012,6 +2090,55 @@ i40e_check_tx_queue_offloads(struct rte_eth_dev *dev, uint64_t requested)
return !((mandatory ^ requested) & supported);
}
+static int
+i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_tx_queue *txq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (i40e_tx_queue_init(txq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(txq->queue_id,
+ dev->data->tx_queues,
+ dev->data->nb_tx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_tx_function.
+ */
+ ad->tx_simple_allowed = true;
+ ad->tx_vec_allowed = true;
+ i40e_set_tx_function_flag(dev, txq);
+ i40e_set_tx_function(dev);
+ return 0;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup.");
+ return -EINVAL;
+ }
+ }
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
+ PMD_DRV_LOG(ERR, "No-simple tx is required.");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
int
i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -2194,10 +2321,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_tx_queue(txq);
txq->q_set = TRUE;
- dev->data->tx_queues[queue_idx] = txq;
-
- /* Use a simple TX queue without offloads or multi segs if possible */
- i40e_set_tx_function_flag(dev, txq);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -2212,6 +2335,20 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_tx_queue_setup_runtime(dev, txq)) {
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ } else {
+ /**
+ * Use a simple TX queue without offloads or
+ * multi segs if possible
+ */
+ i40e_set_tx_function_flag(dev, txq);
+ }
+ dev->data->tx_queues[queue_idx] = txq;
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure Qi Zhang
@ 2018-04-23 17:45 ` Ferruh Yigit
2018-04-24 3:16 ` Zhang, Qi Z
0 siblings, 1 reply; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-23 17:45 UTC (permalink / raw)
To: Qi Zhang, thomas
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/22/2018 12:58 PM, Qi Zhang wrote:
> Add command to change specific queue's ring size configure,
> the new value will only take effect after command that restart
> the device(port stop <port_id>/port start <port_id>) or command
> that setup the queue(port <port_id> rxq <qid> setup) at runtime.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> app/test-pmd/cmdline.c | 102 ++++++++++++++++++++++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 +++
> 2 files changed, 111 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index b50e11e60..22e4d4585 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -846,6 +846,11 @@ static void cmd_help_long_parsed(void *parsed_result,
> "port config mtu X value\n"
> " Set the MTU of port X to a given value\n\n"
>
> + "port config (port_id) (rxq|txq) (queue_id) ring_size (value)\n"
> + " Set a rx/tx queue's ring size configuration, the new"
> + " value will take effect after command that (re-)start the port"
> + " or command that setup the specific queue\n\n"
"port config all rxq|txq|rxd|txd <value>" is used to set number of queues (rxq)
or number of descriptors in queue (rxd).
Problem is this is not flexible and your version is better.
What do you think removing old rxd|txd part with this patch, to prevent duplication?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/5] ethdev: support runtime queue setup
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 1/5] ethdev: support " Qi Zhang
@ 2018-04-23 17:45 ` Ferruh Yigit
0 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-23 17:45 UTC (permalink / raw)
To: Qi Zhang, thomas
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/22/2018 12:58 PM, Qi Zhang wrote:
> It's not possible to setup a queue when the port is started
> because of a check in ethdev layer. New capability flags are
> added in order to relax this check for devices which support
> queue setup in runtime. The functions rte_eth_[rx|tx]_queue_setup
> will raise an error only if the port is started and runtime setup
> of queue is not supported.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>
> v7:
> - update default.init
> - rename runtime_queue_setup_capa to dev_capa for generic.
> - fix typo.
>
> v6:
> - fix tx queue state check in rte_eth_tx_queue_setup
>
>
> doc/guides/nics/features.rst | 18 ++++++++++++++++++
> doc/guides/nics/features/default.ini | 2 ++
> lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
> lib/librte_ether/rte_ethdev.h | 7 +++++++
> 4 files changed, 45 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 1b4fb979f..67d459f80 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -892,7 +892,25 @@ Documentation describes performance values.
>
> See ``dpdk.org/doc/perf/*``.
>
> +.. _nic_features_runtime_rx_queue_setup:
>
> +Runtime Rx queue setup
> +----------------------
> +
> +Supports Rx queue setup after device started.
> +
> +* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_RX_QUEUE_SETUP``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
> +
> +.. _nic_features_runtime_tx_queue_setup:
> +
> +Runtime Tx queue setup
> +----------------------
> +
> +Supports Tx queue setup after device started.
> +
> +* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_TX_QUEUE_SETUP``.
> +* **[related] API**: ``rte_eth_dev_info_get()``.
>
> .. _nic_features_other:
>
> diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
> index dae2ad776..dae80d52f 100644
> --- a/doc/guides/nics/features/default.ini
> +++ b/doc/guides/nics/features/default.ini
> @@ -78,3 +78,5 @@ x86-64 =
> Usage doc =
> Design doc =
> Perf doc =
> +Runtime Rx queue setup =
> +Runtime Tx queue setup =
The order of this file is the display order; can you please move these two new
features closer to the queue- or configuration-related features?
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure Qi Zhang
@ 2018-04-23 17:45 ` Ferruh Yigit
0 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-23 17:45 UTC (permalink / raw)
To: Qi Zhang, thomas
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/22/2018 12:58 PM, Qi Zhang wrote:
> Each queue has independent configure information in rte_port.
> Base on this, we are able to add new commands to configure
> different queues with different value.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
<...>
> @@ -1752,30 +1753,41 @@ rxtx_config_display(void)
> nb_fwd_lcores, nb_fwd_ports);
>
> RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf;
> - struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf;
> + struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf[0];
> + struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf[0];
>
> printf(" port %d:\n", (unsigned int)pid);
> printf(" CRC stripping %s\n",
> (ports[pid].dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_CRC_STRIP) ?
> "enabled" : "disabled");
> - printf(" RX queues=%d - RX desc=%d - RX free threshold=%d\n",
> - nb_rxq, nb_rxd, rx_conf->rx_free_thresh);
> - printf(" RX threshold registers: pthresh=%d hthresh=%d "
> - " wthresh=%d\n",
> - rx_conf->rx_thresh.pthresh,
> - rx_conf->rx_thresh.hthresh,
> - rx_conf->rx_thresh.wthresh);
> - printf(" TX queues=%d - TX desc=%d - TX free threshold=%d\n",
> - nb_txq, nb_txd, tx_conf->tx_free_thresh);
> - printf(" TX threshold registers: pthresh=%d hthresh=%d "
> - " wthresh=%d\n",
> - tx_conf->tx_thresh.pthresh,
> - tx_conf->tx_thresh.hthresh,
> - tx_conf->tx_thresh.wthresh);
> - printf(" TX RS bit threshold=%d - TXQ offloads=0x%"PRIx64"\n",
> - tx_conf->tx_rs_thresh, tx_conf->offloads);
> + printf(" RX queues = %d\n", nb_rxq);
> + for (qid = 0; qid < nb_rxq; qid++) {
> + printf(" Queue Index = %d\n", qid);
> + printf(" RX desc=%d - RX free threshold=%d\n",
> + ports[pid].nb_rx_desc[qid],
> + rx_conf[qid].rx_free_thresh);
> + printf(" RX threshold registers: pthresh=%d hthresh=%d "
> + " wthresh=%d\n",
> + rx_conf[qid].rx_thresh.pthresh,
> + rx_conf[qid].rx_thresh.hthresh,
> + rx_conf[qid].rx_thresh.wthresh);
> + }
> + printf(" TX queues = %d\n", nb_txq);
> + for (qid = 0; qid < nb_txq; qid++) {
> + printf(" Queue Index = %d\n", qid);
> + printf(" TX desc=%d - TX free threshold=%d\n",
> + ports[pid].nb_tx_desc[qid],
> + tx_conf[qid].tx_free_thresh);
> + printf(" TX threshold registers: pthresh=%d hthresh=%d "
> + " wthresh=%d\n",
> + tx_conf[qid].tx_thresh.pthresh,
> + tx_conf[qid].tx_thresh.hthresh,
> + tx_conf[qid].tx_thresh.wthresh);
> + printf(" TX RS bit threshold=%d - TXQ offloads=0x%"PRIx64"\n",
> + tx_conf[qid].tx_rs_thresh,
> + tx_conf[qid].offloads);
> + }
This part requires a rebase because of recent updates.
It was wrong to display queue-specific values as a single value; thanks for fixing.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/5] runtime queue setup
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
` (4 preceding siblings ...)
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 5/5] net/i40e: enable runtime queue setup Qi Zhang
@ 2018-04-23 17:45 ` Ferruh Yigit
5 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-23 17:45 UTC (permalink / raw)
To: Qi Zhang, thomas
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/22/2018 12:58 PM, Qi Zhang wrote:
> From: Qi Zhang <Qi.z.zhang@intel.com>
>
> v7:
> - update default.ini and i40e.ini.
> - rename runtime_queue_setup_capa to dev_capa for generic.
> - testpmd queue setup command be moved to "ports" command group.
> - remove ring_size and offload from queue setup command in testpmd.
> - enable per queue config in testpmd.
> - enable queue ring size configure command in testpmd.
> - fix couple typo.
>
> TODO:
> queue offload config commmand is not implemented yet, but per queue
> configure data structure is already supported in PATCH 3
>
> v6:
> - fix tx queue state check in rte_eth_rx_queue_setup
> - fix error message in testpmd.
>
> v5:
> - fix first tx queue check in i40e.
>
> v4:
> - fix i40e rx/tx funciton conflict handle.
> - no need conflict check for first rx/tx queue at runtime setup.
> - fix missing offload paramter in testpmd cmdline.
>
> v3:
> - not overload deferred start.
> - rename deferred setup to runtime setup.
> - remove unecessary testpmd parameters (patch 2/4 of v2)
> - add offload support to testpmd queue setup command line
> - i40e fix: return fail when required rx/tx function conflict with
> exist setup.
>
> v2:
> - enhance comment in rte_ethdev.h
>
> According to exist implementation,rte_eth_[rx|tx]_queue_setup will
> always return fail if device is already started(rte_eth_dev_start).
>
> This can't satisfied the usage when application want to deferred setup
> part of the queues while keep traffic running on those queues already
> be setup.
>
> example:
> rte_eth_dev_config(nb_rxq = 2, nb_txq =2)
> rte_eth_rx_queue_setup(idx = 0 ...)
> rte_eth_rx_queue_setup(idx = 0 ...)
> rte_eth_dev_start(...) /* [rx|tx]_burst is ready to start on queue 0 */
> rte_eth_rx_queue_setup(idx=1 ...) /* fail*/
>
> Basically this is not a general hardware limitation, because for NIC
> like i40e, ixgbe, it is not necessary to stop the whole device before
> configure a fresh queue or reconfigure an exist queue with no traffic
> on it.
>
> The patch let etherdev driver expose the capability flag through
> rte_eth_dev_info_get when it support deferred queue configuraiton,
> then base on these flag, rte_eth_[rx|tx]_queue_setup could decide
> continue to setup the queue or just return fail when device already
> started.
>
>
> Qi Zhang (1):
> net/i40e: enable runtime queue setup
>
> qi Zhang (4):
> ethdev: support runtime queue setup
> app/testpmd: add command for queue setup
> app/testpmd: enable per queue configure
> app/testpmd: enable queue ring size configure
Overall looks good to me. There are a few minor comments/questions on individual
patches.
For series,
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Please feel free to keep the ack for the next version of the set, based on the comments
on individual patches.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure
2018-04-23 17:45 ` Ferruh Yigit
@ 2018-04-24 3:16 ` Zhang, Qi Z
2018-04-24 11:05 ` Ferruh Yigit
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-24 3:16 UTC (permalink / raw)
To: Yigit, Ferruh, thomas
Cc: Ananyev, Konstantin, dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, April 24, 2018 1:45 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org;
> Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [PATCH v7 4/5] app/testpmd: enable queue ring size configure
>
> On 4/22/2018 12:58 PM, Qi Zhang wrote:
> > Add command to change specific queue's ring size configure, the new
> > value will only take effect after command that restart the device(port
> > stop <port_id>/port start <port_id>) or command that setup the
> > queue(port <port_id> rxq <qid> setup) at runtime.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > app/test-pmd/cmdline.c | 102
> ++++++++++++++++++++++++++++
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 +++
> > 2 files changed, 111 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > b50e11e60..22e4d4585 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -846,6 +846,11 @@ static void cmd_help_long_parsed(void
> *parsed_result,
> > "port config mtu X value\n"
> > " Set the MTU of port X to a given value\n\n"
> >
> > + "port config (port_id) (rxq|txq) (queue_id) ring_size (value)\n"
> > + " Set a rx/tx queue's ring size configuration, the new"
> > + " value will take effect after command that (re-)start the port"
> > + " or command that setup the specific queue\n\n"
>
> "port config all rxq|txq|rxd|txd <value>" is used to set number of queues
> (rxq) or number of descriptors in queue (rxd).
>
> Problem is this is not flexible and your version is better.
>
> What do you think removing old rxd|txd part with this patch, to prevent
> duplication?
I'm not sure.
Do we need some command to reset all queues' ring sizes to the default value?
We probably need to support the "all" syntax in the new command before considering removing this one.
Also, the per-queue config will be reset to the original command-line parameter by any command that calls init_port_config (for example: port config all rxq ...),
while "port config all rxd ..." modifies the command-line value, so they are different.
Regards
Qi
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure
2018-04-24 3:16 ` Zhang, Qi Z
@ 2018-04-24 11:05 ` Ferruh Yigit
0 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-24 11:05 UTC (permalink / raw)
To: Zhang, Qi Z, thomas
Cc: Ananyev, Konstantin, dev, Xing, Beilei, Wu, Jingjing, Lu, Wenzhuo
On 4/24/2018 4:16 AM, Zhang, Qi Z wrote:
>
>
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Tuesday, April 24, 2018 1:45 AM
>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; thomas@monjalon.net
>> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org;
>> Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Subject: Re: [PATCH v7 4/5] app/testpmd: enable queue ring size configure
>>
>> On 4/22/2018 12:58 PM, Qi Zhang wrote:
>>> Add command to change specific queue's ring size configure, the new
>>> value will only take effect after command that restart the device(port
>>> stop <port_id>/port start <port_id>) or command that setup the
>>> queue(port <port_id> rxq <qid> setup) at runtime.
>>>
>>> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>>> ---
>>> app/test-pmd/cmdline.c | 102
>> ++++++++++++++++++++++++++++
>>> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 +++
>>> 2 files changed, 111 insertions(+)
>>>
>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
>>> b50e11e60..22e4d4585 100644
>>> --- a/app/test-pmd/cmdline.c
>>> +++ b/app/test-pmd/cmdline.c
>>> @@ -846,6 +846,11 @@ static void cmd_help_long_parsed(void
>> *parsed_result,
>>> "port config mtu X value\n"
>>> " Set the MTU of port X to a given value\n\n"
>>>
>>> + "port config (port_id) (rxq|txq) (queue_id) ring_size (value)\n"
>>> + " Set a rx/tx queue's ring size configuration, the new"
>>> + " value will take effect after command that (re-)start the port"
>>> + " or command that setup the specific queue\n\n"
>>
>> "port config all rxq|txq|rxd|txd <value>" is used to set number of queues
>> (rxq) or number of descriptors in queue (rxd).
>>
>> Problem is this is not flexible and your version is better.
>>
>> What do you think removing old rxd|txd part with this patch, to prevent
>> duplication?
>
> I'm not sure.
> Do we need some command to reset all queue's ring size to default value.
> Probably we need to support "all" syntax on new command before consider remove this.
My concern was having multiple commands with different syntax for the same
result; if both have different use cases, that is OK.
>
> Also per queue config will be reset to original command line parameter by any command that call init_port_config, (for example: port config all rxq ...)
> while "port config all rxd ..." modify the command line value, so they are different
>
> Regards
> Qi
>
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 0/5] runtime queue setup
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
` (9 preceding siblings ...)
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 1/5] ethdev: support " Qi Zhang
` (5 more replies)
10 siblings, 6 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
v8:
- re-order in default.ini and i40e.ini.
- rebase
v7:
- update default.ini and i40e.ini.
- rename runtime_queue_setup_capa to dev_capa to make it generic.
- testpmd queue setup command be moved to "ports" command group.
- remove ring_size and offload from queue setup command in testpmd.
- enable per queue config in testpmd.
- enable queue ring size configure command in testpmd.
- fix couple typo.
TODO:
the queue offload config command is not implemented yet, but the per-queue
configuration data structure is already supported in PATCH 3
v6:
- fix tx queue state check in rte_eth_rx_queue_setup
- fix error message in testpmd.
v5:
- fix first tx queue check in i40e.
v4:
- fix i40e rx/tx function conflict handling.
- no conflict check needed for the first rx/tx queue at runtime setup.
- fix missing offload parameter in testpmd cmdline.
v3:
- not overload deferred start.
- rename deferred setup to runtime setup.
- remove unnecessary testpmd parameters (patch 2/4 of v2)
- add offload support to testpmd queue setup command line
- i40e fix: return failure when the required rx/tx function conflicts with
the existing setup.
v2:
- enhance comment in rte_ethdev.h
Qi Zhang (5):
ethdev: support runtime queue setup
app/testpmd: add command for queue setup
app/testpmd: enable per queue configure
app/testpmd: enable queue ring size configure
net/i40e: enable runtime queue setup
app/test-pmd/cmdline.c | 217 ++++++++++++++++++++++++++++
app/test-pmd/config.c | 67 ++++++---
app/test-pmd/testpmd.c | 101 ++++++++-----
app/test-pmd/testpmd.h | 6 +-
doc/guides/nics/features.rst | 18 +++
doc/guides/nics/features/default.ini | 2 +
doc/guides/nics/features/i40e.ini | 2 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 16 ++
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 ++++++++++++++++++++---
lib/librte_ether/rte_ethdev.c | 30 ++--
lib/librte_ether/rte_ethdev.h | 7 +
12 files changed, 554 insertions(+), 99 deletions(-)
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 1/5] ethdev: support runtime queue setup
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 14:01 ` Thomas Monjalon
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 2/5] app/testpmd: add command for " Qi Zhang
` (4 subsequent siblings)
5 siblings, 1 reply; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu,
Qi Zhang, Qi Zhang
From: Qi Zhang <Qi.z.zhang@intel.com>
It's not possible to set up a queue when the port is started
because of a check in the ethdev layer. New capability flags are
added in order to relax this check for devices which support
queue setup at runtime. The functions rte_eth_[rx|tx]_queue_setup
will raise an error only if the port is started and runtime queue
setup is not supported.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
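
As a usage illustration (the helper below is hypothetical and not part of the
patch; the ring size and the NULL rx_conf are assumptions), an application can
probe the new capability before attempting a post-start queue setup:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Set up one more Rx queue while the port is running, if the PMD
     * advertises runtime Rx queue setup; otherwise report -ENOTSUP so
     * the caller knows it must stop the port first. */
    static int
    runtime_rx_queue_setup(uint16_t port_id, uint16_t queue_id,
                           struct rte_mempool *mp)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);
            if (!(dev_info.dev_capa & DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
                    return -ENOTSUP;

            return rte_eth_rx_queue_setup(port_id, queue_id, 1024,
                                          rte_eth_dev_socket_id(port_id),
                                          NULL, mp);
    }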
---
v8:
- re-order in default.ini
v7:
- update default.ini
- rename runtime_queue_setup_capa to dev_capa to make it generic.
- fix typo.
v6:
- fix tx queue state check in rte_eth_tx_queue_setup
doc/guides/nics/features.rst | 18 ++++++++++++++++++
doc/guides/nics/features/default.ini | 2 ++
lib/librte_ether/rte_ethdev.c | 30 ++++++++++++++++++------------
lib/librte_ether/rte_ethdev.h | 7 +++++++
4 files changed, 45 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1b4fb979f..67d459f80 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -892,7 +892,25 @@ Documentation describes performance values.
See ``dpdk.org/doc/perf/*``.
+.. _nic_features_runtime_rx_queue_setup:
+Runtime Rx queue setup
+----------------------
+
+Supports Rx queue setup after device started.
+
+* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_RX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
+
+.. _nic_features_runtime_tx_queue_setup:
+
+Runtime Tx queue setup
+----------------------
+
+Supports Tx queue setup after device started.
+
+* **[provides] rte_eth_dev_info**: ``dev_capa:DEV_CAPA_RUNTIME_TX_QUEUE_SETUP``.
+* **[related] API**: ``rte_eth_dev_info_get()``.
.. _nic_features_other:
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index dae2ad776..2f03c1d0d 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -17,6 +17,8 @@ Lock-free Tx queue =
Fast mbuf free =
Free Tx mbuf on demand =
Queue start/stop =
+Runtime Rx queue setup =
+Runtime Tx queue setup =
MTU update =
Jumbo frame =
Scattered Rx =
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 49e9b83cf..0e503ab7e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1425,12 +1425,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup, -ENOTSUP);
@@ -1482,6 +1476,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.dev_capa &
+ DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->rx_queue_state[rx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
rxq = dev->data->rx_queues;
if (rxq[rx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
@@ -1557,12 +1560,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
- if (dev->data->dev_started) {
- RTE_PMD_DEBUG_TRACE(
- "port %d must be stopped to allow configuration\n", port_id);
- return -EBUSY;
- }
-
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_setup, -ENOTSUP);
@@ -1587,6 +1584,15 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
return -EINVAL;
}
+ if (dev->data->dev_started &&
+ !(dev_info.dev_capa &
+ DEV_CAPA_RUNTIME_TX_QUEUE_SETUP))
+ return -EBUSY;
+
+ if (dev->data->tx_queue_state[tx_queue_id] !=
+ RTE_ETH_QUEUE_STATE_STOPPED)
+ return -EBUSY;
+
txq = dev->data->tx_queues;
if (txq[tx_queue_id]) {
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8985b718e..4096f688a 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -993,6 +993,11 @@ struct rte_eth_conf {
*/
#define DEV_TX_OFFLOAD_IP_TNL_TSO 0x00080000
+#define DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
+/**< Device supports Rx queue setup after device started*/
+#define DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
+/**< Device supports Tx queue setup after device started*/
+
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1066,6 +1071,8 @@ struct rte_eth_dev_info {
struct rte_eth_dev_portconf default_rxportconf;
/** Tx parameter recommendations */
struct rte_eth_dev_portconf default_txportconf;
+ /** Generic device capabilities */
+ uint64_t dev_capa;
};
/**
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 2/5] app/testpmd: add command for queue setup
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 1/5] ethdev: support " Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 3/5] app/testpmd: enable per queue configure Qi Zhang
` (3 subsequent siblings)
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a new command to set up a queue; rte_eth_[rx|tx]_queue_setup will
be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
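
As an illustration (the ids are examples only), a queue of a started port can
now be (re)configured from the testpmd prompt with:

    testpmd> port 0 rxq 1 setup

which calls rte_eth_rx_queue_setup() for that queue, using the descriptor
count, socket and rx_conf stored in the corresponding rte_port entry.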
---
v7:
- remove ring_size and offload parameters and move to the "ports" command
group.
v6:
- fix error message for rx_free_thresh check.
v5:
- fix command description.
v4:
- fix missing offload in command line.
v3:
- add offload parameter to queue setup command.
- a couple of code refactorings.
app/test-pmd/cmdline.c | 115 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++
2 files changed, 122 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 2ec9d0caa..f248adc38 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -850,6 +850,9 @@ static void cmd_help_long_parsed(void *parsed_result,
" Start/stop a rx/tx queue of port X. Only take effect"
" when port X is started\n\n"
+ "port (port_id) (rxq|txq) (queue_id) setup\n"
+ " Setup a rx/tx queue of port X.\n\n"
+
"port config (port_id|all) l2-tunnel E-tag ether-type"
" (value)\n"
" Set the value of E-tag ether-type.\n\n"
@@ -2287,6 +2290,117 @@ cmdline_parse_inst_t cmd_config_rxtx_queue = {
},
};
+/* *** configure port rxq/txq setup *** */
+struct cmd_setup_rxtx_queue {
+ cmdline_fixed_string_t port;
+ portid_t portid;
+ cmdline_fixed_string_t rxtxq;
+ uint16_t qid;
+ cmdline_fixed_string_t setup;
+};
+
+/* Common CLI fields for queue setup */
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, port, "port");
+cmdline_parse_token_num_t cmd_setup_rxtx_queue_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, portid, UINT16);
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_rxtxq =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, rxtxq, "rxq#txq");
+cmdline_parse_token_num_t cmd_setup_rxtx_queue_qid =
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, qid, UINT16);
+cmdline_parse_token_string_t cmd_setup_rxtx_queue_setup =
+ TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, setup, "setup");
+
+static void
+cmd_setup_rxtx_queue_parsed(
+ void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_setup_rxtx_queue *res = parsed_result;
+ struct rte_port *port;
+ struct rte_mempool *mp;
+ unsigned int socket_id;
+ uint8_t isrx = 0;
+ int ret;
+
+ if (port_id_is_invalid(res->portid, ENABLED_WARN))
+ return;
+
+ if (res->portid == (portid_t)RTE_PORT_ALL) {
+ printf("Invalid port id\n");
+ return;
+ }
+
+ if (!strcmp(res->rxtxq, "rxq"))
+ isrx = 1;
+ else if (!strcmp(res->rxtxq, "txq"))
+ isrx = 0;
+ else {
+ printf("Unknown parameter\n");
+ return;
+ }
+
+ if (isrx && rx_queue_id_is_invalid(res->qid)) {
+ printf("Invalid rx queue\n");
+ return;
+ } else if (!isrx && tx_queue_id_is_invalid(res->qid)) {
+ printf("Invalid tx queue\n");
+ return;
+ }
+
+ port = &ports[res->portid];
+ if (isrx) {
+ socket_id = rxring_numa[res->portid];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ mp = mbuf_pool_find(socket_id);
+ if (mp == NULL) {
+ printf("Failed to setup RX queue: "
+ "No mempool allocation"
+ " on the socket %d\n",
+ rxring_numa[res->portid]);
+ return;
+ }
+ ret = rte_eth_rx_queue_setup(res->portid,
+ res->qid,
+ nb_rxd,
+ socket_id,
+ &port->rx_conf,
+ mp);
+ if (ret)
+ printf("Failed to setup RX queue\n");
+ } else {
+ socket_id = txring_numa[res->portid];
+ if (!numa_support || socket_id == NUMA_NO_CONFIG)
+ socket_id = port->socket_id;
+
+ ret = rte_eth_tx_queue_setup(res->portid,
+ res->qid,
+ nb_txd,
+ socket_id,
+ &port->tx_conf);
+ if (ret)
+ printf("Failed to setup TX queue\n");
+ }
+}
+
+cmdline_parse_inst_t cmd_setup_rxtx_queue = {
+ .f = cmd_setup_rxtx_queue_parsed,
+ .data = NULL,
+ .help_str = "port <port_id> rxq|txq <queue_idx> setup",
+ .tokens = {
+ (void *)&cmd_setup_rxtx_queue_port,
+ (void *)&cmd_setup_rxtx_queue_portid,
+ (void *)&cmd_setup_rxtx_queue_rxtxq,
+ (void *)&cmd_setup_rxtx_queue_qid,
+ (void *)&cmd_setup_rxtx_queue_setup,
+ NULL,
+ },
+};
+
+
/* *** Configure RSS RETA *** */
struct cmd_config_rss_reta {
cmdline_fixed_string_t port;
@@ -16248,6 +16362,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
+ (cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_config_rss_reta,
(cmdline_parse_inst_t *)&cmd_showport_reta,
(cmdline_parse_inst_t *)&cmd_config_burst,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 593b13a3d..065ed49e8 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1630,6 +1630,13 @@ Start/stop a rx/tx queue on a specific port::
testpmd> port (port_id) (rxq|txq) (queue_id) (start|stop)
+port setup queue
+~~~~~~~~~~~~~~~~~~~~~
+
+Setup a rx/tx queue on a specific port::
+
+ testpmd> port (port_id) (rxq|txq) (queue_id) setup
+
Only take effect when port is started.
port config - speed
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 3/5] app/testpmd: enable per queue configure
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 1/5] ethdev: support " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 2/5] app/testpmd: add command for " Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 4/5] app/testpmd: enable queue ring size configure Qi Zhang
` (2 subsequent siblings)
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Each queue has independent configuration information in rte_port.
Based on this, we are able to add new commands to configure
different queues with different values.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
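
A minimal sketch of what the per-queue fields allow (the helper and the
values are illustrative, not part of the patch; it assumes testpmd's
struct rte_port from testpmd.h):

    static void
    example_per_queue_config(struct rte_port *port)
    {
            /* queue 0 keeps a small ring, queue 1 gets a larger one */
            port->nb_rx_desc[0] = 512;
            port->nb_rx_desc[1] = 1024;

            /* per-queue thresholds may now diverge as well */
            port->rx_conf[0].rx_free_thresh = 32;
            port->rx_conf[1].rx_free_thresh = 64;
    }

The values take effect the next time the corresponding queues are set up.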
---
app/test-pmd/cmdline.c | 8 ++--
app/test-pmd/config.c | 67 +++++++++++++++++++++-----------
app/test-pmd/testpmd.c | 101 ++++++++++++++++++++++++++++++-------------------
app/test-pmd/testpmd.h | 6 ++-
4 files changed, 114 insertions(+), 68 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f248adc38..7066109c2 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2365,9 +2365,9 @@ cmd_setup_rxtx_queue_parsed(
}
ret = rte_eth_rx_queue_setup(res->portid,
res->qid,
- nb_rxd,
+ port->nb_rx_desc[res->qid],
socket_id,
- &port->rx_conf,
+ &port->rx_conf[res->qid],
mp);
if (ret)
printf("Failed to setup RX queue\n");
@@ -2378,9 +2378,9 @@ cmd_setup_rxtx_queue_parsed(
ret = rte_eth_tx_queue_setup(res->portid,
res->qid,
- nb_txd,
+ port->nb_tx_desc[res->qid],
socket_id,
- &port->tx_conf);
+ &port->tx_conf[res->qid]);
if (ret)
printf("Failed to setup TX queue\n");
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d98c08254..216a7eb4e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1755,6 +1755,7 @@ void
rxtx_config_display(void)
{
portid_t pid;
+ queueid_t qid;
printf(" %s packet forwarding%s packets/burst=%d\n",
cur_fwd_eng->fwd_mode_name,
@@ -1769,31 +1770,51 @@ rxtx_config_display(void)
nb_fwd_lcores, nb_fwd_ports);
RTE_ETH_FOREACH_DEV(pid) {
- struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf;
- struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf;
+ struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf[0];
+ struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf[0];
+ uint16_t *nb_rx_desc = &ports[pid].nb_rx_desc[0];
+ uint16_t *nb_tx_desc = &ports[pid].nb_tx_desc[0];
+ /* per port config */
printf(" port %d:\n", (unsigned int)pid);
- printf(" RX queues=%d - RX desc=%d - RX free threshold=%d\n",
- nb_rxq, nb_rxd, rx_conf->rx_free_thresh);
- printf(" RX threshold registers: pthresh=%d hthresh=%d "
- " wthresh=%d\n",
- rx_conf->rx_thresh.pthresh,
- rx_conf->rx_thresh.hthresh,
- rx_conf->rx_thresh.wthresh);
- printf(" Rx offloads=0x%"PRIx64" RXQ offloads=0x%"PRIx64"\n",
- ports[pid].dev_conf.rxmode.offloads,
- rx_conf->offloads);
- printf(" TX queues=%d - TX desc=%d - TX free threshold=%d\n",
- nb_txq, nb_txd, tx_conf->tx_free_thresh);
- printf(" TX threshold registers: pthresh=%d hthresh=%d "
- " wthresh=%d\n",
- tx_conf->tx_thresh.pthresh,
- tx_conf->tx_thresh.hthresh,
- tx_conf->tx_thresh.wthresh);
- printf(" TX RS bit threshold=%d\n", tx_conf->tx_rs_thresh);
- printf(" Tx offloads=0x%"PRIx64" TXQ offloads=0x%"PRIx64"\n",
- ports[pid].dev_conf.txmode.offloads,
- tx_conf->offloads);
+ printf(" Rx offloads=0x%"PRIx64"\n",
+ ports[pid].dev_conf.rxmode.offloads);
+
+ printf(" Tx offloads=0x%"PRIx64"\n",
+ ports[pid].dev_conf.txmode.offloads);
+
+ printf(" RX queue number: %d\n", nb_rxq);
+
+ /* per rx queue config */
+ for (qid = 0; qid < nb_rxq; qid++) {
+ printf(" RX queue: %d\n", qid);
+ printf(" RX desc=%d - RX free threshold=%d\n",
+ nb_rx_desc[qid], rx_conf[qid].rx_free_thresh);
+ printf(" RX threshold registers: pthresh=%d hthresh=%d "
+ " wthresh=%d\n",
+ rx_conf[qid].rx_thresh.pthresh,
+ rx_conf[qid].rx_thresh.hthresh,
+ rx_conf[qid].rx_thresh.wthresh);
+ printf(" RX Offloads=0x%"PRIx64"\n",
+ rx_conf[qid].offloads);
+ }
+
+ printf(" Tx queue number: %d\n", nb_txq);
+
+ /* per tx queue config */
+ for (qid = 0; qid < nb_txq; qid++) {
+ printf(" TX queue: %d\n", qid);
+ printf(" TX desc=%d - TX free threshold=%d\n",
+ nb_tx_desc[qid], tx_conf[qid].tx_free_thresh);
+ printf(" TX threshold registers: pthresh=%d hthresh=%d "
+ " wthresh=%d\n",
+ tx_conf[qid].tx_thresh.pthresh,
+ tx_conf[qid].tx_thresh.hthresh,
+ tx_conf[qid].tx_thresh.wthresh);
+ printf(" TX RS bit threshold=%d\n", tx_conf->tx_rs_thresh);
+ printf(" TX offloads=0x%"PRIx64"\n",
+ tx_conf[qid].offloads);
+ }
}
}
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d6da41927..f9b637ba8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1593,20 +1593,24 @@ start_port(portid_t pid)
}
if (port->need_reconfig_queues > 0) {
port->need_reconfig_queues = 0;
- port->tx_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
- /* Apply Tx offloads configuration */
- port->tx_conf.offloads = port->dev_conf.txmode.offloads;
/* setup tx queues */
for (qi = 0; qi < nb_txq; qi++) {
+ port->tx_conf[qi].txq_flags =
+ ETH_TXQ_FLAGS_IGNORE;
+ /* Apply Tx offloads configuration */
+ port->tx_conf[qi].offloads =
+ port->dev_conf.txmode.offloads;
if ((numa_support) &&
(txring_numa[pi] != NUMA_NO_CONFIG))
diag = rte_eth_tx_queue_setup(pi, qi,
- nb_txd,txring_numa[pi],
- &(port->tx_conf));
+ port->nb_tx_desc[qi],
+ txring_numa[pi],
+ &(port->tx_conf[qi]));
else
diag = rte_eth_tx_queue_setup(pi, qi,
- nb_txd,port->socket_id,
- &(port->tx_conf));
+ port->nb_tx_desc[qi],
+ port->socket_id,
+ &(port->tx_conf[qi]));
if (diag == 0)
continue;
@@ -1617,15 +1621,17 @@ start_port(portid_t pid)
RTE_PORT_STOPPED) == 0)
printf("Port %d can not be set back "
"to stopped\n", pi);
- printf("Fail to configure port %d tx queues\n", pi);
+ printf("Fail to configure port %d tx queues\n",
+ pi);
/* try to reconfigure queues next time */
port->need_reconfig_queues = 1;
return -1;
}
- /* Apply Rx offloads configuration */
- port->rx_conf.offloads = port->dev_conf.rxmode.offloads;
- /* setup rx queues */
for (qi = 0; qi < nb_rxq; qi++) {
+ /* Apply Rx offloads configuration */
+ port->rx_conf[qi].offloads =
+ port->dev_conf.rxmode.offloads;
+ /* setup rx queues */
if ((numa_support) &&
(rxring_numa[pi] != NUMA_NO_CONFIG)) {
struct rte_mempool * mp =
@@ -1639,8 +1645,10 @@ start_port(portid_t pid)
}
diag = rte_eth_rx_queue_setup(pi, qi,
- nb_rxd,rxring_numa[pi],
- &(port->rx_conf),mp);
+ port->nb_rx_desc[pi],
+ rxring_numa[pi],
+ &(port->rx_conf[qi]),
+ mp);
} else {
struct rte_mempool *mp =
mbuf_pool_find(port->socket_id);
@@ -1652,8 +1660,10 @@ start_port(portid_t pid)
return -1;
}
diag = rte_eth_rx_queue_setup(pi, qi,
- nb_rxd,port->socket_id,
- &(port->rx_conf), mp);
+ port->nb_rx_desc[pi],
+ port->socket_id,
+ &(port->rx_conf[qi]),
+ mp);
}
if (diag == 0)
continue;
@@ -1664,7 +1674,8 @@ start_port(portid_t pid)
RTE_PORT_STOPPED) == 0)
printf("Port %d can not be set back "
"to stopped\n", pi);
- printf("Fail to configure port %d rx queues\n", pi);
+ printf("Fail to configure port %d rx queues\n",
+ pi);
/* try to reconfigure queues next time */
port->need_reconfig_queues = 1;
return -1;
@@ -2225,39 +2236,51 @@ map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
static void
rxtx_port_config(struct rte_port *port)
{
- port->rx_conf = port->dev_info.default_rxconf;
- port->tx_conf = port->dev_info.default_txconf;
+ uint16_t qid;
- /* Check if any RX/TX parameters have been passed */
- if (rx_pthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.pthresh = rx_pthresh;
+ for (qid = 0; qid < nb_rxq; qid++) {
+ port->rx_conf[qid] = port->dev_info.default_rxconf;
- if (rx_hthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.hthresh = rx_hthresh;
+ /* Check if any Rx parameters have been passed */
+ if (rx_pthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.pthresh = rx_pthresh;
- if (rx_wthresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_thresh.wthresh = rx_wthresh;
+ if (rx_hthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.hthresh = rx_hthresh;
- if (rx_free_thresh != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_free_thresh = rx_free_thresh;
+ if (rx_wthresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_thresh.wthresh = rx_wthresh;
- if (rx_drop_en != RTE_PMD_PARAM_UNSET)
- port->rx_conf.rx_drop_en = rx_drop_en;
+ if (rx_free_thresh != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_free_thresh = rx_free_thresh;
- if (tx_pthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.pthresh = tx_pthresh;
+ if (rx_drop_en != RTE_PMD_PARAM_UNSET)
+ port->rx_conf[qid].rx_drop_en = rx_drop_en;
- if (tx_hthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.hthresh = tx_hthresh;
+ port->nb_rx_desc[qid] = nb_rxd;
+ }
+
+ for (qid = 0; qid < nb_txq; qid++) {
+ port->tx_conf[qid] = port->dev_info.default_txconf;
+
+ /* Check if any Tx parameters have been passed */
+ if (tx_pthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.pthresh = tx_pthresh;
- if (tx_wthresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_thresh.wthresh = tx_wthresh;
+ if (tx_hthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.hthresh = tx_hthresh;
- if (tx_rs_thresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_rs_thresh = tx_rs_thresh;
+ if (tx_wthresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_thresh.wthresh = tx_wthresh;
- if (tx_free_thresh != RTE_PMD_PARAM_UNSET)
- port->tx_conf.tx_free_thresh = tx_free_thresh;
+ if (tx_rs_thresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_rs_thresh = tx_rs_thresh;
+
+ if (tx_free_thresh != RTE_PMD_PARAM_UNSET)
+ port->tx_conf[qid].tx_free_thresh = tx_free_thresh;
+
+ port->nb_tx_desc[qid] = nb_txd;
+ }
}
void
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 070919822..6f6eada66 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -194,8 +194,10 @@ struct rte_port {
uint8_t need_reconfig_queues; /**< need reconfiguring queues or not */
uint8_t rss_flag; /**< enable rss or not */
uint8_t dcb_flag; /**< enable dcb */
- struct rte_eth_rxconf rx_conf; /**< rx configuration */
- struct rte_eth_txconf tx_conf; /**< tx configuration */
+ uint16_t nb_rx_desc[MAX_QUEUE_ID+1]; /**< per queue rx desc number */
+ uint16_t nb_tx_desc[MAX_QUEUE_ID+1]; /**< per queue tx desc number */
+ struct rte_eth_rxconf rx_conf[MAX_QUEUE_ID+1]; /**< per queue rx configuration */
+ struct rte_eth_txconf tx_conf[MAX_QUEUE_ID+1]; /**< per queue tx configuration */
struct ether_addr *mc_addr_pool; /**< pool of multicast addrs */
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
uint8_t slave_flag; /**< bonding slave port */
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 4/5] app/testpmd: enable queue ring size configure
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
` (2 preceding siblings ...)
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 3/5] app/testpmd: enable per queue configure Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 5/5] net/i40e: enable runtime queue setup Qi Zhang
2018-04-24 14:50 ` [dpdk-dev] [PATCH v8 0/5] " Ferruh Yigit
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
Add a command to change a specific queue's ring size configuration.
The new value will only take effect after a command that restarts
the device (port stop <port_id>/port start <port_id>) or a command
that sets up the queue (port <port_id> rxq <qid> setup) at runtime.
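For example, a hypothetical testpmd session (port and queue ids are
illustrative; the port stop/start and per-queue stop/setup/start
commands already exist or are added elsewhere in this series):
    testpmd> port config 0 rxq 1 ring_size 1024
    testpmd> port stop 0
    testpmd> port start 0
or, on a PMD that supports runtime queue setup, only the affected
queue needs to be set up again (stopping it first is shown for safety
and may not be strictly required):
    testpmd> port config 0 rxq 1 ring_size 1024
    testpmd> port 0 rxq 1 stop
    testpmd> port 0 rxq 1 setup
    testpmd> port 0 rxq 1 start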
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-pmd/cmdline.c | 102 ++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 +++
2 files changed, 111 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 7066109c2..9e7f744b7 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -846,6 +846,11 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config mtu X value\n"
" Set the MTU of port X to a given value\n\n"
+ "port config (port_id) (rxq|txq) (queue_id) ring_size (value)\n"
+ " Set a rx/tx queue's ring size configuration, the new"
+ " value will take effect after command that (re-)start the port"
+ " or command that setup the specific queue\n\n"
+
"port (port_id) (rxq|txq) (queue_id) (start|stop)\n"
" Start/stop a rx/tx queue of port X. Only take effect"
" when port X is started\n\n"
@@ -2196,6 +2201,102 @@ cmdline_parse_inst_t cmd_config_rss_hash_key = {
},
};
+/* *** configure port rxq/txq ring size *** */
+struct cmd_config_rxtx_ring_size {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t config;
+ portid_t portid;
+ cmdline_fixed_string_t rxtxq;
+ uint16_t qid;
+ cmdline_fixed_string_t rsize;
+ uint16_t size;
+};
+
+static void
+cmd_config_rxtx_ring_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_rxtx_ring_size *res = parsed_result;
+ struct rte_port *port;
+ uint8_t isrx;
+
+ if (port_id_is_invalid(res->portid, ENABLED_WARN))
+ return;
+
+ if (res->portid == (portid_t)RTE_PORT_ALL) {
+ printf("Invalid port id\n");
+ return;
+ }
+
+ port = &ports[res->portid];
+
+ if (!strcmp(res->rxtxq, "rxq"))
+ isrx = 1;
+ else if (!strcmp(res->rxtxq, "txq"))
+ isrx = 0;
+ else {
+ printf("Unknown parameter\n");
+ return;
+ }
+
+ if (isrx && rx_queue_id_is_invalid(res->qid))
+ return;
+ else if (!isrx && tx_queue_id_is_invalid(res->qid))
+ return;
+
+ if (isrx && res->size != 0 && res->size <= rx_free_thresh) {
+ printf("Invalid rx ring_size, must > rx_free_thresh: %d\n",
+ rx_free_thresh);
+ return;
+ }
+
+ if (isrx)
+ port->nb_rx_desc[res->qid] = res->size;
+ else
+ port->nb_tx_desc[res->qid] = res->size;
+
+ cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_config =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ config, "config");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ portid, UINT16);
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rxtxq =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ rxtxq, "rxq#txq");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_qid =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ qid, UINT16);
+cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rsize =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ rsize, "ring_size");
+cmdline_parse_token_num_t cmd_config_rxtx_ring_size_size =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
+ size, UINT16);
+
+cmdline_parse_inst_t cmd_config_rxtx_ring_size = {
+ .f = cmd_config_rxtx_ring_size_parsed,
+ .data = NULL,
+ .help_str = "port config <port_id> rxq|txq <queue_id> ring_size <value>",
+ .tokens = {
+ (void *)&cmd_config_rxtx_ring_size_port,
+ (void *)&cmd_config_rxtx_ring_size_config,
+ (void *)&cmd_config_rxtx_ring_size_portid,
+ (void *)&cmd_config_rxtx_ring_size_rxtxq,
+ (void *)&cmd_config_rxtx_ring_size_qid,
+ (void *)&cmd_config_rxtx_ring_size_rsize,
+ (void *)&cmd_config_rxtx_ring_size_size,
+ NULL,
+ },
+};
+
/* *** configure port rxq/txq start/stop *** */
struct cmd_config_rxtx_queue {
cmdline_fixed_string_t port;
@@ -16361,6 +16462,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
+ (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
(cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
(cmdline_parse_inst_t *)&cmd_config_rss_reta,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 065ed49e8..3f2f676e9 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1623,6 +1623,15 @@ Close all ports or a specific port::
testpmd> port close (port_id|all)
+port config - queue ring size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a rx/tx queue ring size::
+
+ testpmd> port (port_id) (rxq|txq) (queue_id) ring_size (value)
+
+Only take effect after command that (re-)start the port or command that setup specific queue.
+
port start/stop queue
~~~~~~~~~~~~~~~~~~~~~
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* [dpdk-dev] [PATCH v8 5/5] net/i40e: enable runtime queue setup
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
` (3 preceding siblings ...)
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 4/5] app/testpmd: enable queue ring size configure Qi Zhang
@ 2018-04-24 12:44 ` Qi Zhang
2018-04-24 14:50 ` [dpdk-dev] [PATCH v8 0/5] " Ferruh Yigit
5 siblings, 0 replies; 95+ messages in thread
From: Qi Zhang @ 2018-04-24 12:44 UTC (permalink / raw)
To: thomas, ferruh.yigit
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu, Qi Zhang
From: Qi Zhang <Qi.z.zhang@intel.com>
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation where the
device is already started.
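As a rough application-side sketch (not part of this patch; "mb_pool"
and "nb_rxd" are illustrative names, and the exact conflict rules are
the ones checked by the runtime setup paths added below), an extra Rx
queue could be added while the port keeps forwarding on the queues
that were set up before start:
#include <stdio.h>
#include <rte_ethdev.h>
/* Sketch: add Rx queue "qid" on an already started port. Reusing the
 * PMD's default_rxconf and a ring size consistent with the running
 * queues avoids conflicting with the Rx burst function (bulk alloc /
 * scattered / vector) that was already selected.
 */
static int
add_rx_queue_at_runtime(uint16_t port_id, uint16_t qid, uint16_t nb_rxd,
			struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	ret = rte_eth_rx_queue_setup(port_id, qid, nb_rxd,
			rte_eth_dev_socket_id(port_id),
			&dev_info.default_rxconf, mb_pool);
	if (ret < 0) {
		printf("runtime Rx queue %u setup failed: %d\n", qid, ret);
		return ret;
	}

	/* Depending on the driver and configuration, the new queue may
	 * still need an explicit start before it receives packets.
	 */
	return rte_eth_dev_rx_queue_start(port_id, qid);
}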
Signed-off-by: Qi Zhang <Qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
v8:
- re-order in i40e.ini
v7:
- update i40e.ini
v5:
- fix first tx queue check.
v4:
- fix rx/tx conflict check.
- no conflict check needed for the first rx/tx queue at runtime setup.
v3:
- no queue start/stop in setup/release
- return failure when the required rx/tx function conflicts with the
existing setup
doc/guides/nics/features/i40e.ini | 2 +
drivers/net/i40e/i40e_ethdev.c | 4 +
drivers/net/i40e/i40e_rxtx.c | 183 +++++++++++++++++++++++++++++++++-----
3 files changed, 166 insertions(+), 23 deletions(-)
diff --git a/doc/guides/nics/features/i40e.ini b/doc/guides/nics/features/i40e.ini
index e862712c9..02087bcbd 100644
--- a/doc/guides/nics/features/i40e.ini
+++ b/doc/guides/nics/features/i40e.ini
@@ -9,6 +9,8 @@ Link status = Y
Link status event = Y
Rx interrupt = Y
Queue start/stop = Y
+Runtime Rx queue setup = Y
+Runtime Tx queue setup = Y
Jumbo frame = Y
Scattered Rx = Y
TSO = Y
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 180ac7449..e329042df 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3244,6 +3244,10 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_GRE_TNL_TSO |
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+ dev_info->dev_capa =
+ DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
+ DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
dev_info->reta_size = pf->hash_lut_size;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index bc660596b..df855ff3a 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1706,6 +1706,75 @@ i40e_check_rx_queue_offloads(struct rte_eth_dev *dev, uint64_t requested)
return !((mandatory ^ requested) & supported);
}
+static int
+i40e_dev_first_queue(uint16_t idx, void **queues, int num)
+{
+ uint16_t i;
+
+ for (i = 0; i < num; i++) {
+ if (i != idx && queues[i])
+ return 0;
+ }
+
+ return 1;
+}
+
+static int
+i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_rx_queue *rxq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ int use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ uint16_t buf_size =
+ (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
+ RTE_PKTMBUF_HEADROOM);
+ int use_scattered_rx =
+ ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size);
+
+ if (i40e_rx_queue_init(rxq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do RX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(rxq->queue_id,
+ dev->data->rx_queues,
+ dev->data->nb_rx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_rx_function.
+ */
+ ad->rx_bulk_alloc_allowed = true;
+ ad->rx_vec_allowed = true;
+ dev->data->scattered_rx = use_scattered_rx;
+ if (use_def_burst_func)
+ ad->rx_bulk_alloc_allowed = false;
+ i40e_set_rx_function(dev);
+ return 0;
+ }
+
+ /* check bulk alloc conflict */
+ if (ad->rx_bulk_alloc_allowed && use_def_burst_func) {
+ PMD_DRV_LOG(ERR, "Can't use default burst.");
+ return -EINVAL;
+ }
+ /* check scatterred conflict */
+ if (!dev->data->scattered_rx && use_scattered_rx) {
+ PMD_DRV_LOG(ERR, "Scattered rx is required.");
+ return -EINVAL;
+ }
+ /* check vector conflict */
+ if (ad->rx_vec_allowed && i40e_rxq_vec_setup(rxq)) {
+ PMD_DRV_LOG(ERR, "Failed vector rx setup.");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1834,25 +1903,6 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_rx_queue(rxq);
rxq->q_set = TRUE;
- dev->data->rx_queues[queue_idx] = rxq;
-
- use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq);
-
- if (!use_def_burst_func) {
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "satisfied. Rx Burst Bulk Alloc function will be "
- "used on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
- } else {
- PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
- "not satisfied, Scattered Rx is requested, "
- "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
- "not enabled on port=%d, queue=%d.",
- rxq->port_id, rxq->queue_id);
- ad->rx_bulk_alloc_allowed = false;
- }
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -1867,6 +1917,34 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_rx_queue_setup_runtime(dev, rxq)) {
+ i40e_dev_rx_queue_release(rxq);
+ return -EINVAL;
+ }
+ } else {
+ use_def_burst_func =
+ check_rx_burst_bulk_alloc_preconditions(rxq);
+ if (!use_def_burst_func) {
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function will be "
+ "used on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+ } else {
+ PMD_INIT_LOG(DEBUG,
+ "Rx Burst Bulk Alloc Preconditions are "
+ "not satisfied, Scattered Rx is requested, "
+ "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is "
+ "not enabled on port=%d, queue=%d.",
+ rxq->port_id, rxq->queue_id);
+ ad->rx_bulk_alloc_allowed = false;
+ }
+ }
+
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
@@ -2012,6 +2090,55 @@ i40e_check_tx_queue_offloads(struct rte_eth_dev *dev, uint64_t requested)
return !((mandatory ^ requested) & supported);
}
+static int
+i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
+ struct i40e_tx_queue *txq)
+{
+ struct i40e_adapter *ad =
+ I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+ if (i40e_tx_queue_init(txq) != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR,
+ "Failed to do TX queue initialization");
+ return -EINVAL;
+ }
+
+ if (i40e_dev_first_queue(txq->queue_id,
+ dev->data->tx_queues,
+ dev->data->nb_tx_queues)) {
+ /**
+ * If it is the first queue to setup,
+ * set all flags to default and call
+ * i40e_set_tx_function.
+ */
+ ad->tx_simple_allowed = true;
+ ad->tx_vec_allowed = true;
+ i40e_set_tx_function_flag(dev, txq);
+ i40e_set_tx_function(dev);
+ return 0;
+ }
+
+ /* check vector conflict */
+ if (ad->tx_vec_allowed) {
+ if (txq->tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ ||
+ i40e_txq_vec_setup(txq)) {
+ PMD_DRV_LOG(ERR, "Failed vector tx setup.");
+ return -EINVAL;
+ }
+ }
+ /* check simple tx conflict */
+ if (ad->tx_simple_allowed) {
+ if (((txq->txq_flags & I40E_SIMPLE_FLAGS) !=
+ I40E_SIMPLE_FLAGS) ||
+ txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
+ PMD_DRV_LOG(ERR, "No-simple tx is required.");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
int
i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -2194,10 +2321,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
i40e_reset_tx_queue(txq);
txq->q_set = TRUE;
- dev->data->tx_queues[queue_idx] = txq;
-
- /* Use a simple TX queue without offloads or multi segs if possible */
- i40e_set_tx_function_flag(dev, txq);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (!(vsi->enabled_tc & (1 << i)))
@@ -2212,6 +2335,20 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->dcb_tc = i;
}
+ if (dev->data->dev_started) {
+ if (i40e_dev_tx_queue_setup_runtime(dev, txq)) {
+ i40e_dev_tx_queue_release(txq);
+ return -EINVAL;
+ }
+ } else {
+ /**
+ * Use a simple TX queue without offloads or
+ * multi segs if possible
+ */
+ i40e_set_tx_function_flag(dev, txq);
+ }
+ dev->data->tx_queues[queue_idx] = txq;
+
return 0;
}
--
2.13.6
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/5] ethdev: support runtime queue setup
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 1/5] ethdev: support " Qi Zhang
@ 2018-04-24 14:01 ` Thomas Monjalon
0 siblings, 0 replies; 95+ messages in thread
From: Thomas Monjalon @ 2018-04-24 14:01 UTC (permalink / raw)
To: Qi Zhang
Cc: ferruh.yigit, konstantin.ananyev, dev, beilei.xing, jingjing.wu,
wenzhuo.lu, Qi Zhang
24/04/2018 14:44, Qi Zhang:
> From: Qi Zhang <Qi.z.zhang@intel.com>
>
> It's not possible to setup a queue when the port is started
> because of a check in ethdev layer. New capability flags are
> added in order to relax this check for devices which support
> queue setup in runtime. The functions rte_eth_[rx|tx]_queue_setup
> will raise an error only if the port is started and runtime setup
> of queue is not supported.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
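A minimal sketch of how an application can consume the new capability
(the dev_capa field and DEV_CAPA_RUNTIME_RX_QUEUE_SETUP flag are the
ones introduced by this series; the helper name is illustrative):
#include <rte_ethdev.h>
/* Return non-zero when the PMD allows rte_eth_rx_queue_setup() to be
 * called after rte_eth_dev_start(); otherwise queue setup on a started
 * port is rejected by the ethdev layer.
 */
static int
can_setup_rx_queue_at_runtime(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	return (dev_info.dev_capa & DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) != 0;
}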
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v8 0/5] runtime queue setup
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
` (4 preceding siblings ...)
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 5/5] net/i40e: enable runtime queue setup Qi Zhang
@ 2018-04-24 14:50 ` Ferruh Yigit
5 siblings, 0 replies; 95+ messages in thread
From: Ferruh Yigit @ 2018-04-24 14:50 UTC (permalink / raw)
To: Qi Zhang, thomas
Cc: konstantin.ananyev, dev, beilei.xing, jingjing.wu, wenzhuo.lu
On 4/24/2018 1:44 PM, Qi Zhang wrote:
> v8:
> - re-order in default.ini and i40e.ini.
> - rebase
>
> v7:
> - update default.ini and i40e.ini.
> - rename runtime_queue_setup_capa to dev_capa for generic.
> - testpmd queue setup command be moved to "ports" command group.
> - remove ring_size and offload from queue setup command in testpmd.
> - enable per queue config in testpmd.
> - enable queue ring size configure command in testpmd.
> - fix a couple of typos.
>
> TODO:
> queue offload config command is not implemented yet, but the per-queue
> configuration data structure is already supported in PATCH 3
>
> v6:
> - fix tx queue state check in rte_eth_rx_queue_setup
> - fix error message in testpmd.
>
> v5:
> - fix first tx queue check in i40e.
>
> v4:
> - fix i40e rx/tx function conflict handling.
> - no conflict check needed for the first rx/tx queue at runtime setup.
> - fix missing offload parameter in testpmd cmdline.
>
> v3:
> - not overload deferred start.
> - rename deferred setup to runtime setup.
> - remove unnecessary testpmd parameters (patch 2/4 of v2)
> - add offload support to testpmd queue setup command line
> - i40e fix: return failure when the required rx/tx function conflicts
> with the existing setup.
>
> v2:
> - enhance comment in rte_ethdev.h
>
> Qi Zhang (5):
> ethdev: support runtime queue setup
> app/testpmd: add command for queue setup
> app/testpmd: enable per queue configure
> app/testpmd: enable queue ring size configure
> net/i40e: enable runtime queue setup
Series applied to dpdk-next-net/master, thanks.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-10 13:59 ` Thomas Monjalon
2018-04-20 11:14 ` Ferruh Yigit
@ 2018-04-24 19:36 ` Thomas Monjalon
2018-04-25 5:33 ` Zhang, Qi Z
1 sibling, 1 reply; 95+ messages in thread
From: Thomas Monjalon @ 2018-04-24 19:36 UTC (permalink / raw)
To: Qi Zhang
Cc: dev, konstantin.ananyev, beilei.xing, jingjing.wu, wenzhuo.lu,
ferruh.yigit, declan.doherty
10/04/2018 15:59, Thomas Monjalon:
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001
> > +/**< Deferred setup rx queue */
> > +#define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
> > +/**< Deferred setup tx queue */
>
> Please use RTE_ETH_ prefix.
Qi, you missed this comment.
It must be fixed by a new patch, please.
And the field must mention the related flags prefix.
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-24 19:36 ` Thomas Monjalon
@ 2018-04-25 5:33 ` Zhang, Qi Z
2018-04-25 7:54 ` Thomas Monjalon
0 siblings, 1 reply; 95+ messages in thread
From: Zhang, Qi Z @ 2018-04-25 5:33 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ananyev, Konstantin, Xing, Beilei, Wu, Jingjing, Lu,
Wenzhuo, Yigit, Ferruh, Doherty, Declan
Hi Thomas:
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, April 25, 2018 3:36 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Doherty, Declan <declan.doherty@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
>
> 10/04/2018 15:59, Thomas Monjalon:
> > > --- a/lib/librte_ether/rte_ethdev.h
> > > +++ b/lib/librte_ether/rte_ethdev.h
> > > +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001 /**< Deferred
> setup
> > > +rx queue */ #define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
> /**<
> > > +Deferred setup tx queue */
> >
> > Please use RTE_ETH_ prefix.
Actually, I saw that all the offload flags start with DEV_, so
do I still need to rename to RTE_ETH_DEV_CAPA_***?
>
> Qi, you missed this comment.
> It must be fixed by a new patch, please.
>
> And the field must mention the related flags prefix.
OK I will add this.
Regards
Qi
^ permalink raw reply [flat|nested] 95+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/3] ether: support runtime queue setup
2018-04-25 5:33 ` Zhang, Qi Z
@ 2018-04-25 7:54 ` Thomas Monjalon
0 siblings, 0 replies; 95+ messages in thread
From: Thomas Monjalon @ 2018-04-25 7:54 UTC (permalink / raw)
To: Zhang, Qi Z
Cc: dev, Ananyev, Konstantin, Xing, Beilei, Wu, Jingjing, Lu,
Wenzhuo, Yigit, Ferruh, Doherty, Declan
25/04/2018 07:33, Zhang, Qi Z:
> > 10/04/2018 15:59, Thomas Monjalon:
> > > > --- a/lib/librte_ether/rte_ethdev.h
> > > > +++ b/lib/librte_ether/rte_ethdev.h
> > > > +#define DEV_RUNTIME_RX_QUEUE_SETUP 0x00000001 /**< Deferred
> > setup
> > > > +rx queue */ #define DEV_RUNTIME_TX_QUEUE_SETUP 0x00000002
> > /**<
> > > > +Deferred setup tx queue */
> > >
> > > Please use RTE_ETH_ prefix.
>
> Actually, I saw that all the offload flags start with DEV_, so
> do I still need to rename to RTE_ETH_DEV_CAPA_***?
Yes
The flags starting with DEV_ will be managed later because it is hard
to rename an existing flag. It is in my roadmap.
The idea is to not add new flags with a wrong namespace prefix.
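A sketch of what the requested rename could look like (keeping the
values from the definitions quoted above; the final names and wording
are up to the follow-up patch):
/* Illustrative follow-up in rte_ethdev.h, not an actual submission: */
#define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
/**< Device supports Rx queue setup after the device has been started. */
#define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
/**< Device supports Tx queue setup after the device has been started. */
/* The dev_capa field documentation would then mention the
 * RTE_ETH_DEV_CAPA_ prefix, as requested above.
 */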
^ permalink raw reply [flat|nested] 95+ messages in thread
Thread overview: 95+ messages
2018-02-12 4:53 [dpdk-dev] [PATCH 0/4] deferred queue setup Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 1/4] ether: support " Qi Zhang
2018-02-12 13:55 ` Thomas Monjalon
2018-02-12 4:53 ` [dpdk-dev] [PATCH 2/4] app/testpmd: add parameters for " Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 3/4] app/testpmd: add command for " Qi Zhang
2018-02-12 4:53 ` [dpdk-dev] [PATCH 4/4] net/i40e: enable deferred " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 0/4] " Qi Zhang
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 1/4] ether: support " Qi Zhang
2018-03-14 12:31 ` Ananyev, Konstantin
2018-03-15 3:13 ` Zhang, Qi Z
2018-03-15 13:16 ` Ananyev, Konstantin
2018-03-15 15:08 ` Zhang, Qi Z
2018-03-15 15:38 ` Ananyev, Konstantin
2018-03-16 0:42 ` Zhang, Qi Z
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 2/4] app/testpmd: add parameters for " Qi Zhang
2018-03-14 17:38 ` Ananyev, Konstantin
2018-03-15 3:58 ` Zhang, Qi Z
2018-03-15 13:42 ` Ananyev, Konstantin
2018-03-15 14:31 ` Zhang, Qi Z
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 3/4] app/testpmd: add command for " Qi Zhang
2018-03-14 17:36 ` Ananyev, Konstantin
2018-03-14 17:41 ` Ananyev, Konstantin
2018-03-15 3:59 ` Zhang, Qi Z
2018-03-02 4:13 ` [dpdk-dev] [PATCH v2 4/4] net/i40e: enable deferred " Qi Zhang
2018-03-14 12:35 ` Ananyev, Konstantin
2018-03-15 3:22 ` Zhang, Qi Z
2018-03-15 3:50 ` Zhang, Qi Z
2018-03-15 13:22 ` Ananyev, Konstantin
2018-03-15 14:30 ` Zhang, Qi Z
2018-03-15 15:22 ` Ananyev, Konstantin
2018-03-16 0:52 ` Zhang, Qi Z
2018-03-16 9:54 ` Ananyev, Konstantin
2018-03-16 11:00 ` Bruce Richardson
2018-03-16 13:18 ` Zhang, Qi Z
2018-03-16 14:15 ` Zhang, Qi Z
2018-03-16 18:47 ` Ananyev, Konstantin
2018-03-18 7:55 ` Zhang, Qi Z
2018-03-20 13:18 ` Ananyev, Konstantin
2018-03-21 1:53 ` Zhang, Qi Z
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 0/3] runtime " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 1/3] ether: support " Qi Zhang
2018-03-25 19:47 ` Ananyev, Konstantin
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 2/3] app/testpmd: add command for " Qi Zhang
2018-03-21 7:28 ` [dpdk-dev] [PATCH v3 3/3] net/i40e: enable runtime " Qi Zhang
2018-03-25 19:46 ` Ananyev, Konstantin
2018-03-26 8:49 ` Zhang, Qi Z
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 0/3] " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 1/3] ether: support " Qi Zhang
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add command for " Qi Zhang
2018-04-01 12:21 ` Ananyev, Konstantin
2018-03-26 8:59 ` [dpdk-dev] [PATCH v4 3/3] net/i40e: enable runtime " Qi Zhang
2018-04-01 12:18 ` Ananyev, Konstantin
2018-04-02 2:20 ` Zhang, Qi Z
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 0/3] " Qi Zhang
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 1/3] ether: support " Qi Zhang
2018-04-06 19:42 ` Rosen, Rami
2018-04-08 2:20 ` Zhang, Qi Z
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 2/3] app/testpmd: add command for " Qi Zhang
2018-04-07 15:49 ` Rosen, Rami
2018-04-08 2:22 ` Zhang, Qi Z
2018-04-02 2:59 ` [dpdk-dev] [PATCH v5 3/3] net/i40e: enable runtime " Qi Zhang
2018-04-02 23:36 ` [dpdk-dev] [PATCH v5 0/3] " Ananyev, Konstantin
2018-04-08 2:42 ` Qi Zhang
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 1/3] ether: support " Qi Zhang
2018-04-10 13:59 ` Thomas Monjalon
2018-04-20 11:14 ` Ferruh Yigit
2018-04-24 19:36 ` Thomas Monjalon
2018-04-25 5:33 ` Zhang, Qi Z
2018-04-25 7:54 ` Thomas Monjalon
2018-04-20 11:16 ` Ferruh Yigit
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 2/3] app/testpmd: add command for " Qi Zhang
2018-04-20 11:29 ` Ferruh Yigit
2018-04-22 11:57 ` Zhang, Qi Z
2018-04-08 2:42 ` [dpdk-dev] [PATCH v6 3/3] net/i40e: enable runtime " Qi Zhang
2018-04-20 11:17 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 0/5] " Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 1/5] ethdev: support " Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 2/5] app/testpmd: add command for " Qi Zhang
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 3/5] app/testpmd: enable per queue configure Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 4/5] app/testpmd: enable queue ring size configure Qi Zhang
2018-04-23 17:45 ` Ferruh Yigit
2018-04-24 3:16 ` Zhang, Qi Z
2018-04-24 11:05 ` Ferruh Yigit
2018-04-22 11:58 ` [dpdk-dev] [PATCH v7 5/5] net/i40e: enable runtime queue setup Qi Zhang
2018-04-23 17:45 ` [dpdk-dev] [PATCH v7 0/5] " Ferruh Yigit
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 1/5] ethdev: support " Qi Zhang
2018-04-24 14:01 ` Thomas Monjalon
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 2/5] app/testpmd: add command for " Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 3/5] app/testpmd: enable per queue configure Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 4/5] app/testpmd: enable queue ring size configure Qi Zhang
2018-04-24 12:44 ` [dpdk-dev] [PATCH v8 5/5] net/i40e: enable runtime queue setup Qi Zhang
2018-04-24 14:50 ` [dpdk-dev] [PATCH v8 0/5] " Ferruh Yigit