From: Hanumanth Pothula <hpothula@marvell.com>
To: Aman Singh <aman.deep.singh@intel.com>,
Yuying Zhang <yuying.zhang@intel.com>
Cc: <dev@dpdk.org>, <andrew.rybchenko@oktetlabs.ru>,
<thomas@monjalon.net>, <yux.jiang@intel.com>,
<jerinj@marvell.com>, <ndabilpuram@marvell.com>,
<hpothula@marvell.com>
Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
Date: Mon, 21 Nov 2022 23:37:56 +0530 [thread overview]
Message-ID: <20221121180756.3924770-1-hpothula@marvell.com> (raw)
In-Reply-To: <20221121143347.3923255-1-hpothula@marvell.com>
Validate the ethdev parameter 'max_rx_mempools' to determine whether
the device supports the multi-mempool feature.
Also, add a new testpmd command-line argument, multi-rx-mempool,
to control the multi-mempool feature. By default it is disabled.
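For reference, a minimal sketch of the capability probe described above
(a standalone helper written for illustration only; the actual check
lives in rx_queue_setup() in the diff below):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Illustrative helper: return 0 when the port can take multiple
     * mempools per Rx queue, a negative errno value otherwise.
     */
    static int
    check_multi_rx_mempool_support(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            /* max_rx_mempools == 0 means the PMD supports at most
             * one mempool per Rx queue.
             */
            if (dev_info.max_rx_mempools == 0)
                    return -ENOTSUP;

            return 0;
    }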
Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
---
v7:
- Updated the testpmd argument name from multi-mempool to multi-rx-mempool.
- Updated the definition of the testpmd argument mbuf-size.
- Fixed indentation.
v6:
- Updated run_app.rst with the multi-mempool argument.
- Defined and populated multi_mempool alongside the related arguments.
- Invoked rte_eth_dev_info_get() within the multi-mempool condition.
v5:
- Added testpmd argument to enable multi-mempool feature.
- Simplified logic to distinguish between multi-mempool,
multi-segment and single pool/segment.
v4:
- Updated the if condition.
v3:
- Simplified the conditional check.
- Corrected spelling of 'whether'.
v2:
- Rebased on tip of next-net/main.
---
app/test-pmd/parameters.c | 7 ++-
app/test-pmd/testpmd.c | 64 ++++++++++++++++-----------
app/test-pmd/testpmd.h | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 4 ++
4 files changed, 50 insertions(+), 26 deletions(-)
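In outline, the reworked rx_queue_setup() below selects exactly one of
three mutually exclusive Rx configurations. A condensed restatement of
the diff (illustrative, not literal testpmd code):

    if (rx_pkt_nb_segs > 1 &&
        (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) {
            /* buffer split: describe the segments via rx_conf->rx_seg */
    } else if (multi_rx_mempool == 1) {
            /* multi-mempool: verify dev_info.max_rx_mempools != 0, then
             * hand the per-size pools to the PMD via rx_conf->rx_mempools
             */
    } else {
            /* single pool/segment: pass one mempool to queue setup */
    }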
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..af9ec39cf9 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -88,7 +88,8 @@ usage(char* progname)
"in NUMA mode.\n");
printf(" --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
"N bytes. If multiple numbers are specified the extra pools "
- "will be created to receive with packet split features\n");
+ "will be created to receive packets based on the features "
+ "supported, like buufer-split, multi-mempool.\n");
printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
"in mbuf pools.\n");
printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -155,6 +156,7 @@ usage(char* progname)
printf(" --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
printf(" --txpkts=X[,Y]*: set TX segment sizes"
" or total packet length.\n");
+ printf(" --multi-rx-mempool: enable multi-mempool support\n");
printf(" --txonly-multi-flow: generate multiple flows in txonly mode\n");
printf(" --tx-ip=src,dst: IP addresses in Tx-only mode\n");
printf(" --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
{ "rxpkts", 1, 0, 0 },
{ "rxhdrs", 1, 0, 0 },
{ "txpkts", 1, 0, 0 },
+ { "multi-rx-mempool", 0, 0, 0 },
{ "txonly-multi-flow", 0, 0, 0 },
{ "rxq-share", 2, 0, 0 },
{ "eth-link-speed", 1, 0, 0 },
@@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
else
rte_exit(EXIT_FAILURE, "bad txpkts\n");
}
+ if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
+ multi_rx_mempool = 1;
if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
txonly_multi_flow = 1;
if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..716937925e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
*/
uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
uint8_t rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
uint8_t rx_pkt_nb_offs; /**< Number of specified offsets */
uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
uint32_t prev_hdrs = 0;
int ret;
- /* Verify Rx queue configuration is single pool and segment or
- * multiple pool/segment.
- * @see rte_eth_rxconf::rx_mempools
- * @see rte_eth_rxconf::rx_seg
- */
- if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
- ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
- /* Single pool/segment configuration */
- rx_conf->rx_seg = NULL;
- rx_conf->rx_nseg = 0;
- ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
- nb_rx_desc, socket_id,
- rx_conf, mp);
- goto exit;
- }
- if (rx_pkt_nb_segs > 1 ||
- rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+ if ((rx_pkt_nb_segs > 1) &&
+ (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
/* multi-segment configuration */
for (i = 0; i < rx_pkt_nb_segs; i++) {
struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2687,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
}
rx_conf->rx_nseg = rx_pkt_nb_segs;
rx_conf->rx_seg = rx_useg;
- } else {
+ rx_conf->rx_mempools = NULL;
+ rx_conf->rx_nmempool = 0;
+ ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+ socket_id, rx_conf, NULL);
+ rx_conf->rx_seg = NULL;
+ rx_conf->rx_nseg = 0;
+ } else if (multi_rx_mempool == 1) {
/* multi-pool configuration */
+ struct rte_eth_dev_info dev_info;
+
+ if (mbuf_data_size_n <= 1) {
+ RTE_LOG(ERR, EAL, "invalid number of mempools %u",
+ mbuf_data_size_n);
+ return -EINVAL;
+ }
+ ret = rte_eth_dev_info_get(port_id, &dev_info);
+ if (ret != 0)
+ return ret;
+ if (dev_info.max_rx_mempools == 0) {
+ RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
+ return -ENOTSUP;
+ }
for (i = 0; i < mbuf_data_size_n; i++) {
mpx = mbuf_pool_find(socket_id, i);
rx_mempool[i] = mpx ? mpx : mp;
}
rx_conf->rx_mempools = rx_mempool;
rx_conf->rx_nmempool = mbuf_data_size_n;
- }
- ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+ rx_conf->rx_seg = NULL;
+ rx_conf->rx_nseg = 0;
+ ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
socket_id, rx_conf, NULL);
- rx_conf->rx_seg = NULL;
- rx_conf->rx_nseg = 0;
- rx_conf->rx_mempools = NULL;
- rx_conf->rx_nmempool = 0;
-exit:
+ rx_conf->rx_mempools = NULL;
+ rx_conf->rx_nmempool = 0;
+ } else {
+ /* Single pool/segment configuration */
+ rx_conf->rx_seg = NULL;
+ rx_conf->rx_nseg = 0;
+ rx_conf->rx_mempools = NULL;
+ rx_conf->rx_nmempool = 0;
+ ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+ socket_id, rx_conf, mp);
+ }
+
ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
RTE_ETH_QUEUE_STATE_STOPPED :
RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..0596d38cd2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
extern uint8_t rx_pkt_nb_segs; /**< Number of segments to split */
+extern uint8_t multi_rx_mempool; /**< Enables multi-mempool feature. */
extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
extern uint8_t rx_pkt_nb_offs; /**< Number of specified offsets */
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..af84b2260a 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -365,6 +365,10 @@ The command line options are:
Set TX segment sizes or total packet length. Valid for ``tx-only``
and ``flowgen`` forwarding modes.
+* ``--multi-rx-mempool``
+
+ Enable multi-mempool support, i.e. multiple mbuf pools per Rx queue.
+
* ``--txonly-multi-flow``
Generate multiple flows in txonly mode.
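A hypothetical invocation exercising the new option (EAL core/memory
arguments and pool sizes are illustrative only):

    dpdk-testpmd -l 0-1 -n 4 -- -i --mbuf-size=2048,4096 --multi-rx-mempool

Here --mbuf-size creates one mempool per listed size, and
--multi-rx-mempool passes them all to each Rx queue, provided the
device reports a non-zero max_rx_mempools.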
--
2.25.1