* [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility
  @ 2016-05-05 22:46 Zhihong Wang
  2016-05-05 22:46 ` [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding Zhihong Wang
  ` (9 more replies)
  0 siblings, 10 replies; 54+ messages in thread

From: Zhihong Wang @ 2016-05-05 22:46 UTC (permalink / raw)
To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon

This patch series enables a vhost/virtio pmd performance loopback test in
testpmd. All the features are for general usage.

The loopback test focuses on the maximum full-path packet forwarding
performance between host and guest: it runs the vhost/virtio pmds only,
without introducing extra overhead. The main requirement is therefore
traffic generation, since there is no external packet generator such as
IXIA to help.

In current testpmd, io-fwd is the ideal candidate to perform this loopback
test because it is the fastest possible forwarding engine: start testpmd
io-fwd in the host with 1 vhost pmd port, and start testpmd io-fwd in the
connected guest with 1 corresponding virtio pmd port. These 2 ports form a
forwarding loop: packets received by the host vhost pmd port are forwarded
to the guest virtio pmd port, and packets received by the guest virtio pmd
port are sent back to the host vhost pmd port.

As for traffic generation, "start tx_first" injects a burst of packets into
the loop, which is the ideal way to do it. However, 2 issues remain:

 1. If only 1 burst of packets is injected into the loop, there will almost
    certainly be empty rx operations. For example, when the guest virtio
    pmd port sends a burst to the host and then starts rx immediately, the
    packets are likely still being forwarded by the host vhost pmd port and
    have not reached the guest yet. We need to fill up the ring to keep all
    pmds busy.

 2. io-fwd doesn't provide a retry mechanism, so if packet loss occurs
    there won't be a full burst in the loop.

To address these issues, this series:

 1. Adds an io_retry-fwd engine in testpmd to prevent most packet losses.

 2. Adds a parameter to make the tx_first burst number configurable.

Other related improvements include:

 1. Handle all rxqs when multiqueue is enabled: current testpmd forces a
    single core for each rxq, which causes inconvenience and confusion.

 2. Show the topology at forwarding start: "show config fwd" also does
    this, but showing it directly reduces the chance of mis-configuration.

 3. Add throughput information to the port statistics display of
    "show port stats (port_id|all)".

Finally, there is a documentation update.

Example of how to run the vhost/virtio performance loopback test:

 1. Start testpmd in the host with 1 vhost pmd port only.
 2. Start testpmd in the guest with only 1 virtio pmd port connected to
    the corresponding vhost pmd port.
 3. "set fwd io_retry" in the testpmds in both host and guest.
 4. "start" in the testpmd in the guest.
 5. "start tx_first 8" in the testpmd in the host.

Then use "show port stats all" to monitor the performance.

Zhihong Wang (6):
  testpmd: add io_retry forwarding
  testpmd: configurable tx_first burst number
  testpmd: show throughput in port stats
  testpmd: handle all rxqs in rss setup
  testpmd: show topology at forwarding start
  testpmd: update documentation

 app/test-pmd/Makefile                       |   1 +
 app/test-pmd/cmdline.c                      |  41 ++++++++
 app/test-pmd/config.c                       |  28 ++++--
 app/test-pmd/iofwd-retry.c                  | 139 ++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c                      |  10 +-
 app/test-pmd/testpmd.h                      |   1 +
 doc/guides/testpmd_app_ug/run_app.rst       |   1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  10 +-
 8 files changed, 218 insertions(+), 13 deletions(-)
 create mode 100644 app/test-pmd/iofwd-retry.c

-- 
2.5.0

^ permalink raw reply	[flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding

From: Zhihong Wang @ 2016-05-05 22:46 UTC (permalink / raw)
To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang

This patch adds an io_retry-fwd engine in testpmd to prevent most packet
losses. It can be enabled by "set fwd io_retry".

io-fwd is the fastest possible forwarding engine, good for basic
performance tests. Adding a retry mechanism expands test case coverage to
support scenarios where packet loss affects test results.

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 app/test-pmd/Makefile      |   1 +
 app/test-pmd/iofwd-retry.c | 139 +++++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c     |   1 +
 app/test-pmd/testpmd.h     |   1 +
 4 files changed, 142 insertions(+)
 create mode 100644 app/test-pmd/iofwd-retry.c

diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index 72426f3..a6735cf 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -49,6 +49,7 @@ SRCS-y += parameters.c
 SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += cmdline.c
 SRCS-y += config.c
 SRCS-y += iofwd.c
+SRCS-y += iofwd-retry.c
 SRCS-y += macfwd.c
 SRCS-y += macfwd-retry.c
 SRCS-y += macswap.c
diff --git a/app/test-pmd/iofwd-retry.c b/app/test-pmd/iofwd-retry.c
new file mode 100644
index 0000000..14c5660
--- /dev/null
+++ b/app/test-pmd/iofwd-retry.c
@@ -0,0 +1,139 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdarg.h>
+#include <stdio.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <inttypes.h>
+
+#include <sys/queue.h>
+#include <sys/stat.h>
+
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_string_fns.h>
+
+#include "testpmd.h"
+
+#define TX_RETRY_TIMES 128
+#define TX_RETRY_WAIT_US 1
+
+/*
+ * I/O retry forwarding mode.
+ * Forward packets "as-is" without access to packet data.
+ */
+static void
+pkt_burst_io_retry_forward(struct fwd_stream *fs)
+{
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+	uint16_t nb_rx;
+	uint16_t nb_tx;
+	uint32_t retry;
+
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	uint64_t start_tsc;
+	uint64_t end_tsc;
+	uint64_t core_cycles;
+#endif
+
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	start_tsc = rte_rdtsc();
+#endif
+
+	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,
+			pkts_burst, nb_pkt_per_burst);
+	if (unlikely(nb_rx == 0))
+		return;
+	fs->rx_packets += nb_rx;
+
+#ifdef RTE_TEST_PMD_RECORD_BURST_STATS
+	fs->rx_burst_stats.pkt_burst_spread[nb_rx]++;
+#endif
+
+	retry = 0;
+	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
+			pkts_burst, nb_rx);
+	while (unlikely(nb_tx < nb_rx) && (retry++ < TX_RETRY_TIMES)) {
+		rte_delay_us(TX_RETRY_WAIT_US);
+		nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
+				&pkts_burst[nb_tx], nb_rx - nb_tx);
+	}
+	fs->tx_packets += nb_tx;
+
+#ifdef RTE_TEST_PMD_RECORD_BURST_STATS
+	fs->tx_burst_stats.pkt_burst_spread[nb_tx]++;
+#endif
+
+	if (unlikely(nb_tx < nb_rx)) {
+		fs->fwd_dropped += (nb_rx - nb_tx);
+		do {
+			rte_pktmbuf_free(pkts_burst[nb_tx]);
+		} while (++nb_tx < nb_rx);
+	}
+
+#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
+	end_tsc = rte_rdtsc();
+	core_cycles = (end_tsc - start_tsc);
+	fs->core_cycles = (uint64_t) (fs->core_cycles + core_cycles);
+#endif
+}
+
+struct fwd_engine io_retry_fwd_engine = {
+	.fwd_mode_name  = "io_retry",
+	.port_fwd_begin = NULL,
+	.port_fwd_end   = NULL,
+	.packet_fwd     = pkt_burst_io_retry_forward,
+};
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 26a174c..61abcf8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -143,6 +143,7 @@ streamid_t nb_fwd_streams; /**< Is equal to (nb_ports * nb_rxq). */
  */
 struct fwd_engine * fwd_engines[] = {
 	&io_fwd_engine,
+	&io_retry_fwd_engine,
 	&mac_fwd_engine,
 	&mac_retry_fwd_engine,
 	&mac_swap_engine,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 0f72ca1..fbc18b0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -221,6 +221,7 @@ struct fwd_engine {
 };
 
 extern struct fwd_engine io_fwd_engine;
+extern struct fwd_engine io_retry_fwd_engine;
 extern struct fwd_engine mac_fwd_engine;
 extern struct fwd_engine mac_retry_fwd_engine;
 extern struct fwd_engine mac_swap_engine;
-- 
2.5.0
* Re: [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding

From: Thomas Monjalon @ 2016-05-25  9:32 UTC (permalink / raw)
To: Zhihong Wang
Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch

2016-05-05 18:46, Zhihong Wang:
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
>  extern struct fwd_engine io_fwd_engine;
> +extern struct fwd_engine io_retry_fwd_engine;
>  extern struct fwd_engine mac_fwd_engine;
>  extern struct fwd_engine mac_retry_fwd_engine;
>  extern struct fwd_engine mac_swap_engine;

We now have 2 engines with "retry" behaviour.
It is maybe the way to go, but I want to ask the question:
Would it be possible to have "retry" as an engine parameter?
* Re: [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding

From: Wang, Zhihong @ 2016-05-26  2:40 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, May 25, 2016 5:32 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH 1/6] testpmd: add io_retry forwarding
>
> 2016-05-05 18:46, Zhihong Wang:
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> >  extern struct fwd_engine io_fwd_engine;
> > +extern struct fwd_engine io_retry_fwd_engine;
> >  extern struct fwd_engine mac_fwd_engine;
> >  extern struct fwd_engine mac_retry_fwd_engine;
> >  extern struct fwd_engine mac_swap_engine;
>
> We now have 2 engines with "retry" behaviour.
> It is maybe the way to go, but I want to ask the question:
> Would it be possible to have "retry" as an engine parameter?

If it's just about the way to write commands there isn't much difference,
like "set fwd io_retry" and "set fwd io retry".

Do you mean to add the "retry" for all engines, and also implement this
as a parameter in each original engine? So for example, no iofwd-retry.c,
just add this feature inside iofwd.c?
* Re: [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding

From: Thomas Monjalon @ 2016-05-26  6:27 UTC (permalink / raw)
To: Wang, Zhihong
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

2016-05-26 02:40, Wang, Zhihong:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-05-05 18:46, Zhihong Wang:
> > > --- a/app/test-pmd/testpmd.h
> > > +++ b/app/test-pmd/testpmd.h
> > >  extern struct fwd_engine io_fwd_engine;
> > > +extern struct fwd_engine io_retry_fwd_engine;
> > >  extern struct fwd_engine mac_fwd_engine;
> > >  extern struct fwd_engine mac_retry_fwd_engine;
> > >  extern struct fwd_engine mac_swap_engine;
> >
> > We now have 2 engines with "retry" behaviour.
> > It is maybe the way to go, but I want to ask the question:
> > Would it be possible to have "retry" as an engine parameter?
>
> If it's just about the way to write commands there isn't much difference,
> like "set fwd io_retry" and "set fwd io retry".
>
> Do you mean to add the "retry" for all engines, and also implement this
> as a parameter in each original engine? So for example, no iofwd-retry.c,
> just add this feature inside iofwd.c?

Yes, if it makes sense. For engines other than io_fwd and mac_fwd, the retry
option can be unsupported (return an error) as a first step.
* Re: [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding

From: Wang, Zhihong @ 2016-05-26  9:24 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, May 26, 2016 2:27 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH 1/6] testpmd: add io_retry forwarding
>
> 2016-05-26 02:40, Wang, Zhihong:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2016-05-05 18:46, Zhihong Wang:
> > > > --- a/app/test-pmd/testpmd.h
> > > > +++ b/app/test-pmd/testpmd.h
> > > >  extern struct fwd_engine io_fwd_engine;
> > > > +extern struct fwd_engine io_retry_fwd_engine;
> > > >  extern struct fwd_engine mac_fwd_engine;
> > > >  extern struct fwd_engine mac_retry_fwd_engine;
> > > >  extern struct fwd_engine mac_swap_engine;
> > >
> > > We now have 2 engines with "retry" behaviour.
> > > It is maybe the way to go, but I want to ask the question:
> > > Would it be possible to have "retry" as an engine parameter?
> >
> > If it's just about the way to write commands there isn't much difference,
> > like "set fwd io_retry" and "set fwd io retry".
> >
> > Do you mean to add the "retry" for all engines, and also implement this
> > as a parameter in each original engine? So for example, no iofwd-retry.c,
> > just add this feature inside iofwd.c?
>
> Yes, if it makes sense. For engines other than io_fwd and mac_fwd, the retry
> option can be unsupported (return an error) as a first step.

I think it makes sense in terms of making the code clearer and more
manageable.
* [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number

From: Zhihong Wang @ 2016-05-05 22:46 UTC (permalink / raw)
To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang

This patch makes the tx_first burst number configurable.

Use "start tx_first (burst_num)" to specify how many bursts of packets to
send before forwarding starts, or "start tx_first" as before for the
default single burst.

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 app/test-pmd/cmdline.c | 41 +++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  7 +++++--
 2 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c5b9479..8f78cc6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5240,6 +5240,46 @@ cmdline_parse_inst_t cmd_start_tx_first = {
 	},
 };
 
+/* *** START FORWARDING WITH N TX BURST FIRST *** */
+struct cmd_start_tx_first_n_result {
+	cmdline_fixed_string_t start;
+	cmdline_fixed_string_t tx_first;
+	uint32_t tx_num;
+};
+
+static void
+cmd_start_tx_first_n_parsed(__attribute__((unused)) void *parsed_result,
+		__attribute__((unused)) struct cmdline *cl,
+		__attribute__((unused)) void *data)
+{
+	struct cmd_start_tx_first_n_result *res = parsed_result;
+
+	start_packet_forwarding(res->tx_num);
+}
+
+cmdline_parse_token_string_t cmd_start_tx_first_n_start =
+	TOKEN_STRING_INITIALIZER(struct cmd_start_tx_first_n_result,
+			start, "start");
+cmdline_parse_token_string_t cmd_start_tx_first_n_tx_first =
+	TOKEN_STRING_INITIALIZER(struct cmd_start_tx_first_n_result,
+			tx_first, "tx_first");
+cmdline_parse_token_num_t cmd_start_tx_first_n_tx_num =
+	TOKEN_NUM_INITIALIZER(struct cmd_start_tx_first_n_result,
+			tx_num, UINT32);
+
+cmdline_parse_inst_t cmd_start_tx_first_n = {
+	.f = cmd_start_tx_first_n_parsed,
+	.data = NULL,
+	.help_str = "start packet forwarding, after sending <num> "
+		"bursts of packets",
+	.tokens = {
+		(void *)&cmd_start_tx_first_n_start,
+		(void *)&cmd_start_tx_first_n_tx_first,
+		(void *)&cmd_start_tx_first_n_tx_num,
+		NULL,
+	},
+};
+
 /* *** SET LINK UP *** */
 struct cmd_set_link_up_result {
 	cmdline_fixed_string_t set;
@@ -10399,6 +10439,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_showcfg,
 	(cmdline_parse_inst_t *)&cmd_start,
 	(cmdline_parse_inst_t *)&cmd_start_tx_first,
+	(cmdline_parse_inst_t *)&cmd_start_tx_first_n,
 	(cmdline_parse_inst_t *)&cmd_set_link_up,
 	(cmdline_parse_inst_t *)&cmd_set_link_down,
 	(cmdline_parse_inst_t *)&cmd_reset,
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 61abcf8..b9c8db9 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1037,8 +1037,11 @@ start_packet_forwarding(int with_tx_first)
 		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
 			(*port_fwd_begin)(fwd_ports_ids[i]);
 	}
-	launch_packet_forwarding(run_one_txonly_burst_on_core);
-	rte_eal_mp_wait_lcore();
+	while (with_tx_first--) {
+		launch_packet_forwarding(
+				run_one_txonly_burst_on_core);
+		rte_eal_mp_wait_lcore();
+	}
 	port_fwd_end = tx_only_engine.port_fwd_end;
 	if (port_fwd_end != NULL) {
 		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
-- 
2.5.0
* Re: [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number

From: Thomas Monjalon @ 2016-05-25  9:35 UTC (permalink / raw)
To: Zhihong Wang
Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch

2016-05-05 18:46, Zhihong Wang:
> This patch enables configurable tx_first burst number.
>
> Use "start tx_first (burst_num)" to specify how many bursts of packets to
> be sent before forwarding start, or "start tx_first" like before for the
> default 1 burst send.

The idea here is to fill the loopback latency gap with bursts.
Would it be possible to make it automatic by detecting the first
received packets to stop Tx generator?
* Re: [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number

From: Wang, Zhihong @ 2016-05-26  2:53 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, May 25, 2016 5:35 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH 2/6] testpmd: configurable tx_first burst number
>
> 2016-05-05 18:46, Zhihong Wang:
> > This patch enables configurable tx_first burst number.
> >
> > Use "start tx_first (burst_num)" to specify how many bursts of packets to
> > be sent before forwarding start, or "start tx_first" like before for the
> > default 1 burst send.
>
> The idea here is to fill the loopback latency gap with bursts.
> Would it be possible to make it automatic by detecting the first
> received packets to stop Tx generator?

The idea is great! The implementation might not be graceful though
-- the current tx_first mode first calls the txonly engine before calling
the actual engine, say iofwd, so iofwd is not established before tx_first
is done, therefore no detection.

It's possible to do this, but we'd need to implement another forward
engine like "io_retry_fill_first" alone, and that complicates testpmd just
for this loopback test.

Looks to me it's better to use a combination of existing fwd engines to
do this; it's also more flexible with burst number parameters.
* Re: [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number

From: Thomas Monjalon @ 2016-05-26  6:31 UTC (permalink / raw)
To: Wang, Zhihong
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

2016-05-26 02:53, Wang, Zhihong:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-05-05 18:46, Zhihong Wang:
> > > This patch enables configurable tx_first burst number.
> > >
> > > Use "start tx_first (burst_num)" to specify how many bursts of packets to
> > > be sent before forwarding start, or "start tx_first" like before for the
> > > default 1 burst send.
> >
> > The idea here is to fill the loopback latency gap with bursts.
> > Would it be possible to make it automatic by detecting the first
> > received packets to stop Tx generator?
>
> The idea is great! The implementation might not be graceful though
> -- current tx_first mode first calls txonly engine before calling the
> actual engine, say iofwd, so iofwd is not established before tx_first
> is done, therefore no detection.

And what about rewriting tx_first?
No strong opinion. I let you and Pablo decide.

> It's possible to do this, but we need to implement another forward
> engine like "io_retry_fill_first" alone, it complicates testpmd just for
> this loopback test.
>
> Looks to me it's better to use combination of existing fwd engines to
> do this, it's also more flexible with burst number parameters.
* Re: [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number

From: Wang, Zhihong @ 2016-05-26  9:31 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, May 26, 2016 2:32 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH 2/6] testpmd: configurable tx_first burst number
>
> 2016-05-26 02:53, Wang, Zhihong:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2016-05-05 18:46, Zhihong Wang:
> > > > This patch enables configurable tx_first burst number.
> > > >
> > > > Use "start tx_first (burst_num)" to specify how many bursts of packets to
> > > > be sent before forwarding start, or "start tx_first" like before for the
> > > > default 1 burst send.
> > >
> > > The idea here is to fill the loopback latency gap with bursts.
> > > Would it be possible to make it automatic by detecting the first
> > > received packets to stop Tx generator?
> >
> > The idea is great! The implementation might not be graceful though
> > -- current tx_first mode first calls txonly engine before calling the
> > actual engine, say iofwd, so iofwd is not established before tx_first
> > is done, therefore no detection.
>
> And what about rewriting tx_first?
> No strong opinion. I let you and Pablo decide.

I think the current way is better in terms of simplicity and flexibility.
Also this "fill the ring" criterion doesn't fit other test scenarios; it's
just for this loopback test, but tx_first is for all scenarios.

> > It's possible to do this, but we need to implement another forward
> > engine like "io_retry_fill_first" alone, it complicates testpmd just for
> > this loopback test.
> >
> > Looks to me it's better to use combination of existing fwd engines to
> > do this, it's also more flexible with burst number parameters.
* [dpdk-dev] [PATCH 3/6] testpmd: show throughput in port stats

From: Zhihong Wang @ 2016-05-05 22:46 UTC (permalink / raw)
To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang

This patch adds throughput numbers (in the period since the last use of
this command) to the port statistics display of
"show port stats (port_id|all)".

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 app/test-pmd/config.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1c552e4..bb0b542 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -92,6 +92,7 @@
 #include <rte_ether.h>
 #include <rte_ethdev.h>
 #include <rte_string_fns.h>
+#include <rte_cycles.h>
 
 #include "testpmd.h"
 
@@ -150,6 +151,10 @@ print_ethaddr(const char *name, struct ether_addr *eth_addr)
 void
 nic_stats_display(portid_t port_id)
 {
+	static uint64_t sum_rx[RTE_MAX_ETHPORTS];
+	static uint64_t sum_tx[RTE_MAX_ETHPORTS];
+	static uint64_t cycles[RTE_MAX_ETHPORTS];
+	uint64_t pkt_rx, pkt_tx, cycle;
 	struct rte_eth_stats stats;
 	struct rte_port *port = &ports[port_id];
 	uint8_t i;
@@ -209,6 +214,21 @@ nic_stats_display(portid_t port_id)
 		}
 	}
 
+	cycle = cycles[port_id];
+	cycles[port_id] = rte_rdtsc();
+	if (cycle > 0)
+		cycle = cycles[port_id] - cycle;
+
+	pkt_rx = stats.ipackets - sum_rx[port_id];
+	pkt_tx = stats.opackets - sum_tx[port_id];
+	sum_rx[port_id] = stats.ipackets;
+	sum_tx[port_id] = stats.opackets;
+	printf("\n  Throughput (since last show)\n");
+	printf("  RX-pps: %12"PRIu64"\n"
+		"  TX-pps: %12"PRIu64"\n",
+		cycle > 0 ? pkt_rx * rte_get_tsc_hz() / cycle : 0,
+		cycle > 0 ? pkt_tx * rte_get_tsc_hz() / cycle : 0);
+
 	printf("  %s############################%s\n",
 		nic_stats_border, nic_stats_border);
 }
-- 
2.5.0
* [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup

From: Zhihong Wang @ 2016-05-05 22:46 UTC (permalink / raw)
To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang

This patch removes constraints in rxq handling when multiqueue is enabled,
so that all the rxqs are handled.

Current testpmd forces a dedicated core for each rxq; some rxqs may be
ignored when the core number is less than the rxq number, which causes
confusion and inconvenience.

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 app/test-pmd/config.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index bb0b542..8ab2963 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1193,19 +1193,13 @@ rss_fwd_config_setup(void)
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
 		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-		cur_fwd_config.nb_fwd_streams =
-			(streamid_t)cur_fwd_config.nb_fwd_lcores;
-	else
-		cur_fwd_config.nb_fwd_lcores =
-			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
 
 	setup_fwd_config_of_each_lcore(&cur_fwd_config);
 	rxp = 0; rxq = 0;
-	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
 		struct fwd_stream *fs;
 
 		fs = fwd_streams[lc_id];
-- 
2.5.0
* Re: [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup

From: Thomas Monjalon @ 2016-05-25  9:42 UTC (permalink / raw)
To: Zhihong Wang
Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch

2016-05-05 18:46, Zhihong Wang:
> This patch removes constraints in rxq handling when multiqueue is enabled
> to handle all the rxqs.
>
> Current testpmd forces a dedicated core for each rxq, some rxqs may be
> ignored when core number is less than rxq number, and that causes confusion
> and inconvenience.

I have the feeling that "constraints", "confusion" and "inconvenience"
should be more explained.
Please give some examples with not enough and too many cores. Thanks
* Re: [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup 2016-05-25 9:42 ` Thomas Monjalon @ 2016-05-26 2:55 ` Wang, Zhihong 2016-06-03 9:22 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: Wang, Zhihong @ 2016-05-26 2:55 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo > -----Original Message----- > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] > Sent: Wednesday, May 25, 2016 5:42 PM > To: Wang, Zhihong <zhihong.wang@intel.com> > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo > <pablo.de.lara.guarch@intel.com> > Subject: Re: [PATCH 4/6] testpmd: handle all rxqs in rss setup > > 2016-05-05 18:46, Zhihong Wang: > > This patch removes constraints in rxq handling when multiqueue is enabled > > to handle all the rxqs. > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > ignored when core number is less than rxq number, and that causes confusion > > and inconvenience. > > I have the feeling that "constraints", "confusion" and "inconvenience" > should be more explained. > Please give some examples with not enough and too much cores. Thanks Sure, will add detailed description in v2 ;) ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup 2016-05-26 2:55 ` Wang, Zhihong @ 2016-06-03 9:22 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-03 9:22 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong > Sent: Thursday, May 26, 2016 10:55 AM > To: Thomas Monjalon <thomas.monjalon@6wind.com> > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo > <pablo.de.lara.guarch@intel.com> > Subject: Re: [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup > > > > > -----Original Message----- > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] > > Sent: Wednesday, May 25, 2016 5:42 PM > > To: Wang, Zhihong <zhihong.wang@intel.com> > > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo > > <pablo.de.lara.guarch@intel.com> > > Subject: Re: [PATCH 4/6] testpmd: handle all rxqs in rss setup > > > > 2016-05-05 18:46, Zhihong Wang: > > > This patch removes constraints in rxq handling when multiqueue is enabled > > > to handle all the rxqs. > > > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > > ignored when core number is less than rxq number, and that causes > confusion > > > and inconvenience. > > > > I have the feeling that "constraints", "confusion" and "inconvenience" > > should be more explained. > > Please give some examples with not enough and too much cores. Thanks > > Sure, will add detailed description in v2 ;) V2 has been sent. 
We keep seeing people asking for help with exactly this "confusion"; one recent example: http://openvswitch.org/pipermail/dev/2016-June/072110.html ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (3 preceding siblings ...) 2016-05-05 22:46 ` [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup Zhihong Wang @ 2016-05-05 22:47 ` Zhihong Wang 2016-05-25 9:45 ` Thomas Monjalon 2016-05-05 22:47 ` [dpdk-dev] [PATCH 6/6] testpmd: update documentation Zhihong Wang ` (4 subsequent siblings) 9 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-05-05 22:47 UTC (permalink / raw) To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang This patch show topology at forwarding start. "show config fwd" also does this, but showing it directly can reduce the possibility of misconfiguration. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/testpmd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index b9c8db9..ef72a93 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1003,7 +1003,7 @@ start_packet_forwarding(int with_tx_first) if(!no_flush_rx) flush_fwd_rx_queues(); - fwd_config_setup(); + fwd_config_display(); rxtx_config_display(); for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start 2016-05-05 22:47 ` [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start Zhihong Wang @ 2016-05-25 9:45 ` Thomas Monjalon 2016-05-26 2:56 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: Thomas Monjalon @ 2016-05-25 9:45 UTC (permalink / raw) To: Zhihong Wang Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch 2016-05-05 18:47, Zhihong Wang: > This patch show topology at forwarding start. > > "show config fwd" also does this, but showing it directly can reduce the > possibility of misconfiguration. [...] > - fwd_config_setup(); > + fwd_config_display(); > rxtx_config_display(); Having _display() calling _setup() is really strange. Maybe it is worth to be fixed in this patch. ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start 2016-05-25 9:45 ` Thomas Monjalon @ 2016-05-26 2:56 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-05-26 2:56 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo > -----Original Message----- > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] > Sent: Wednesday, May 25, 2016 5:45 PM > To: Wang, Zhihong <zhihong.wang@intel.com> > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo > <pablo.de.lara.guarch@intel.com> > Subject: Re: [PATCH 5/6] testpmd: show topology at forwarding start > > 2016-05-05 18:47, Zhihong Wang: > > This patch show topology at forwarding start. > > > > "show config fwd" also does this, but showing it directly can reduce the > > possibility of misconfiguration. > [...] > > - fwd_config_setup(); > > + fwd_config_display(); > > rxtx_config_display(); > > Having _display() calling _setup() is really strange. > Maybe it is worth to be fixed in this patch. It looks strange to me too. Will look for a fix. ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 6/6] testpmd: update documentation 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (4 preceding siblings ...) 2016-05-05 22:47 ` [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start Zhihong Wang @ 2016-05-05 22:47 ` Zhihong Wang 2016-05-25 9:48 ` Thomas Monjalon 2016-05-20 8:54 ` [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Wang, Zhihong ` (3 subsequent siblings) 9 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-05-05 22:47 UTC (permalink / raw) To: dev; +Cc: konstantin.ananyev, bruce.richardson, thomas.monjalon, Zhihong Wang This patch updates documentation for testpmd. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- doc/guides/testpmd_app_ug/run_app.rst | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 10 +++++++--- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index f605564..edd3e42 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -328,6 +328,7 @@ The commandline options are: Set the forwarding mode where ``mode`` is one of the following:: io (the default) + io_retry mac mac_retry mac_swap diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index aed5e47..7703c89 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -98,9 +98,11 @@ Start packet forwarding with current configuration:: start tx_first ~~~~~~~~~~~~~~ -Start packet forwarding with current configuration after sending one burst of packets:: +Start packet forwarding with current configuration after sending specified number of bursts of packets:: - testpmd> start tx_first + testpmd> start tx_first (""|burst_num) + +The default burst number is 1 when ``burst_num`` not presented. 
stop ~~~~ @@ -249,7 +251,7 @@ set fwd Set the packet forwarding mode:: - testpmd> set fwd (io|mac|mac_retry|macswap|flowgen| \ + testpmd> set fwd (io|io_retry|mac|mac_retry|macswap|flowgen| \ rxonly|txonly|csum|icmpecho) The available information categories are: @@ -258,6 +260,8 @@ The available information categories are: This is the fastest possible forwarding operation as it does not access packets data. This is the default mode. +* ``io_retry``: Forwards packets "as-is" in I/O retry mode. + * ``mac``: Changes the source and the destination Ethernet addresses of packets before forwarding them. * ``mac_retry``: Same as "mac" forwarding mode, but includes retries if the destination queue is full. -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] testpmd: update documentation 2016-05-05 22:47 ` [dpdk-dev] [PATCH 6/6] testpmd: update documentation Zhihong Wang @ 2016-05-25 9:48 ` Thomas Monjalon 2016-05-26 2:54 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: Thomas Monjalon @ 2016-05-25 9:48 UTC (permalink / raw) To: Zhihong Wang Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch 2016-05-05 18:47, Zhihong Wang: > This patch updates documentation for testpmd. Please avoid such doc update patch. It is preferred to have the doc changes in the patch changing the code. Thanks for the good work! ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] testpmd: update documentation 2016-05-25 9:48 ` Thomas Monjalon @ 2016-05-26 2:54 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-05-26 2:54 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Ananyev, Konstantin, Richardson, Bruce, De Lara Guarch, Pablo > -----Original Message----- > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] > Sent: Wednesday, May 25, 2016 5:48 PM > To: Wang, Zhihong <zhihong.wang@intel.com> > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo > <pablo.de.lara.guarch@intel.com> > Subject: Re: [PATCH 6/6] testpmd: update documentation > > 2016-05-05 18:47, Zhihong Wang: > > This patch updates documentation for testpmd. > > Please avoid such doc update patch. > It is preferred to have the doc changes in the patch changing the code. > Thanks for the good work! Thanks for the hint! Will update in v2. ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (5 preceding siblings ...) 2016-05-05 22:47 ` [dpdk-dev] [PATCH 6/6] testpmd: update documentation Zhihong Wang @ 2016-05-20 8:54 ` Wang, Zhihong 2016-05-25 9:27 ` Thomas Monjalon ` (2 subsequent siblings) 9 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-05-20 8:54 UTC (permalink / raw) To: Richardson, Bruce, thomas.monjalon; +Cc: Ananyev, Konstantin, dev > -----Original Message----- > From: Wang, Zhihong > Sent: Friday, May 6, 2016 6:47 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: [PATCH 0/6] vhost/virtio performance loopback utility > Hi Thomas, Bruce, Do you have any comments on this patch? Thanks Zhihong > This patch enables vhost/virtio pmd performance loopback test in testpmd. > All the features are for general usage. ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (6 preceding siblings ...) 2016-05-20 8:54 ` [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Wang, Zhihong @ 2016-05-25 9:27 ` Thomas Monjalon 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang 9 siblings, 0 replies; 54+ messages in thread From: Thomas Monjalon @ 2016-05-25 9:27 UTC (permalink / raw) To: Pablo de Lara; +Cc: Zhihong Wang, dev, konstantin.ananyev, bruce.richardson CC Pablo, testpmd maintainer Pablo, This patchset looks really valuable to improve performance debugging. Would you have time to dig into a review please? 2016-05-05 18:46, Zhihong Wang: > This patch enables vhost/virtio pmd performance loopback test in testpmd. > All the features are for general usage. > > The loopback test focuses on the maximum full-path packet forwarding > performance between host and guest, it runs vhost/virtio pmd only without > introducing extra overhead. > > Therefore, the main requirement is traffic generation, since there's no > other packet generators like IXIA to help. > > In current testpmd, io-fwd is the ideal candidate to perform this loopback > test because it's the fastest possible forwarding engine: Start testpmd > io-fwd in host with 1 vhost pmd port, and start testpmd io-fwd in the > connected guest with 1 corresponding virtio pmd port, and these 2 ports > form a forwarding loop, packets received by the host vhost pmd port are > forwarded to the guest virtio pmd port, and packets received by the guest > virtio pmd port are sent to the host vhost pmd port. > > As to traffic generation, "start tx_first" injects a burst of packets into > the loop, which is the ideal way to do that. > > However 2 issues remain: > > 1. 
If only 1 burst of packets are injected in the loop, there will > almost definitely be empty rx operations, e.g. When guest virtio pmd > port send burst to the host, then it starts the rx immediately, it's > likely the packets are still being forwarded by host vhost pmd port > and haven't reached the guest yet. > > We need to fill up the ring to keep all pmds busy. > > 2. io-fwd doesn't provide retry mechanism, so if packet loss occurs, > there won't be a full burst in the loop. > > To address these issues, this patch: > > 1. Add an io_retry-fwd in testpmd to prevent most packet losses. > > 2. Add parameter to enable configurable tx_first burst number. > > Other related improvements include: > > 1. Handle all rxqs when multiqueue is enabled: Current testpmd forces a > single core for each rxq which causes inconvenience and confusion. > > 2. Show topology at forwarding start: "show config fwd" also does this, > but show it directly can reduce the possibility of mis-configuration. > > 3. Add throughput information in port statistics display for "show port > stats (port_id|all)". > > Finally there's documentation update. ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v2 0/5] vhost/virtio performance loopback utility 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (7 preceding siblings ...) 2016-05-25 9:27 ` Thomas Monjalon @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang ` (4 more replies) 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang 9 siblings, 5 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon

This patch enables the vhost/virtio pmd performance loopback test in testpmd. All the features are for general usage.

The loopback test focuses on the maximum full-path packet forwarding performance between host and guest; it runs vhost/virtio pmd only, without introducing extra overhead. Therefore, the main requirement is traffic generation, since there are no other packet generators like IXIA to help.

In current testpmd, iofwd is the best candidate to perform this loopback test because it is the fastest possible forwarding engine: start testpmd iofwd in the host with 1 vhost port, and start testpmd iofwd in the connected guest with 1 corresponding virtio port; these 2 ports form a forwarding loop: Host vhost tx -> Guest virtio rx -> Guest virtio tx -> Host vhost rx.

As to traffic generation, "start tx_first" injects a burst of packets into the loop. However, 2 issues remain:

1. If only 1 burst of packets is injected into the loop, there will definitely be empty rx operations, e.g. when the guest virtio port sends a burst to the host and then starts rx immediately, the packets are likely still being forwarded by the host vhost port and haven't reached the guest yet. We need to fill up the ring to keep all pmds busy.

2. iofwd doesn't provide a retry mechanism, so if packet loss occurs, there won't be a full burst in the loop.
To address these issues, this patch:

1. Adds a retry option in testpmd to prevent most packet losses.

2. Adds a parameter to enable a configurable tx_first burst number.

Other related improvements include:

1. Handle all rxqs when multiqueue is enabled: current testpmd forces a single core for each rxq, which causes inconvenience and confusion. This change doesn't break anything; we can still force a single core for each rxq by giving the same number of cores as the number of rxqs.

One example: a Red Hat engineer was doing a multiqueue test with 2 ports in the guest, each with 4 queues, and testpmd was used as the forwarding engine in the guest. As usual he used 1 core for forwarding, and as a result he only saw traffic from port 0 queue 0 to port 1 queue 0; a lot of emails and quite some time were then spent to root-cause it, and of course it was caused by this unreasonable testpmd behavior. Moreover, even once we understand this behavior, if we want to test the above case we still need 8 cores for a single guest to poll all the rxqs, which is obviously too expensive. We have met quite a lot of cases like this.

2. Show topology at forwarding start: "show config fwd" also does this, but showing it directly can reduce the possibility of mis-configuration. Like the case above, if testpmd showed the topology at forwarding start, probably all those debugging efforts could have been saved.

3. Add throughput information in the port statistics display for "show port stats (port_id|all)".

Finally there's a documentation update.

Example of how to enable the vhost/virtio performance loopback test:

1. Start testpmd in the host with 1 vhost port only.

2. Start testpmd in the guest with only 1 virtio port, connected to the corresponding vhost port.

3. "set fwd io retry" in the testpmds in both host and guest.

4. "start" in the testpmd in the guest.

5. "start tx_first 16" in the testpmd in the host.

Then use "show port stats all" to monitor the performance.

-------------- Changes in v2: 1.
Add retry as an option for existing forwarding engines except rxonly. 2. Minor code adjustment and more detailed patch description. Zhihong Wang (5): testpmd: add retry option testpmd: configurable tx_first burst number testpmd: show throughput in port stats testpmd: handle all rxqs in rss setup testpmd: show topology at forwarding start app/test-pmd/Makefile | 1 - app/test-pmd/cmdline.c | 118 +++++++++++++++++++- app/test-pmd/config.c | 79 +++++++++++--- app/test-pmd/csumonly.c | 12 ++ app/test-pmd/flowgen.c | 12 ++ app/test-pmd/icmpecho.c | 15 +++ app/test-pmd/iofwd.c | 22 +++- app/test-pmd/macfwd-retry.c | 164 ---------------------------- app/test-pmd/macfwd.c | 13 +++ app/test-pmd/macswap.c | 12 ++ app/test-pmd/testpmd.c | 13 ++- app/test-pmd/testpmd.h | 14 ++- app/test-pmd/txonly.c | 12 ++ doc/guides/testpmd_app_ug/run_app.rst | 1 - doc/guides/testpmd_app_ug/testpmd_funcs.rst | 16 +-- 15 files changed, 303 insertions(+), 201 deletions(-) delete mode 100644 app/test-pmd/macfwd-retry.c -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
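The numbered setup steps in the cover letter above can be written out as a concrete session sketch. The core masks, socket path, and vhost vdev arguments here are illustrative only, and wiring the guest's virtio device to the host's vhost-user socket via QEMU is assumed and elided:

```shell
# 1. Host: testpmd with a single vhost PMD port (socket path illustrative)
./testpmd -c 0x3 -n 4 --socket-mem 1024 \
    --vdev 'eth_vhost0,iface=/tmp/vhost0,queues=1' -- -i

# 2. Guest: testpmd with only the matching virtio port
./testpmd -c 0x3 -n 4 -- -i

# 3. In both testpmd prompts:
testpmd> set fwd io retry

# 4. In the guest:
testpmd> start

# 5. In the host (inject 16 bursts to fill the loop):
testpmd> start tx_first 16

# Monitor throughput on either side:
testpmd> show port stats all
```

The 16 injected bursts keep the vhost/virtio rings full so neither pmd sits in empty rx polls, while retry mode recovers any partially sent bursts.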
* [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-07 9:28 ` De Lara Guarch, Pablo 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number Zhihong Wang ` (3 subsequent siblings) 4 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch adds retry option in testpmd to prevent most packet losses. It can be enabled by "set fwd <mode> retry". All modes except rxonly support this option. Adding retry mechanism expands test case coverage to support scenarios where packet loss affects test results. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/Makefile | 1 - app/test-pmd/cmdline.c | 75 ++++++++++++- app/test-pmd/config.c | 47 +++++++- app/test-pmd/csumonly.c | 12 ++ app/test-pmd/flowgen.c | 12 ++ app/test-pmd/icmpecho.c | 15 +++ app/test-pmd/iofwd.c | 22 +++- app/test-pmd/macfwd-retry.c | 164 ---------------------------- app/test-pmd/macfwd.c | 13 +++ app/test-pmd/macswap.c | 12 ++ app/test-pmd/testpmd.c | 4 +- app/test-pmd/testpmd.h | 11 +- app/test-pmd/txonly.c | 12 ++ doc/guides/testpmd_app_ug/run_app.rst | 1 - doc/guides/testpmd_app_ug/testpmd_funcs.rst | 10 +- 15 files changed, 227 insertions(+), 184 deletions(-) delete mode 100644 app/test-pmd/macfwd-retry.c diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile index 40039a1..2a0b5a5 100644 --- a/app/test-pmd/Makefile +++ b/app/test-pmd/Makefile @@ -50,7 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += cmdline.c SRCS-y += config.c SRCS-y += iofwd.c SRCS-y += macfwd.c -SRCS-y += macfwd-retry.c SRCS-y += macswap.c SRCS-y += flowgen.c SRCS-y += rxonly.c diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index c5b9479..0af3f05 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ 
-246,8 +246,8 @@ static void cmd_help_long_parsed(void *parsed_result, " Set number of packets per burst.\n\n" "set burst tx delay (microseconds) retry (num)\n" - " Set the transmit delay time and number of retries" - " in mac_retry forwarding mode.\n\n" + " Set the transmit delay time and number of retries," + " effective when retry is enabled.\n\n" "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" @@ -4480,6 +4480,7 @@ static void cmd_set_fwd_mode_parsed(void *parsed_result, { struct cmd_set_fwd_mode_result *res = parsed_result; + retry_enabled = 0; set_pkt_forwarding_mode(res->mode); } @@ -4525,6 +4526,74 @@ static void cmd_set_fwd_mode_init(void) token_struct->string_data.str = token; } +/* *** SET RETRY FORWARDING MODE *** */ +struct cmd_set_fwd_retry_mode_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t fwd; + cmdline_fixed_string_t mode; + cmdline_fixed_string_t retry; +}; + +static void cmd_set_fwd_retry_mode_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_set_fwd_retry_mode_result *res = parsed_result; + + retry_enabled = 1; + set_pkt_forwarding_mode(res->mode); +} + +cmdline_parse_token_string_t cmd_setfwd_retry_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + set, "set"); +cmdline_parse_token_string_t cmd_setfwd_retry_fwd = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + fwd, "fwd"); +cmdline_parse_token_string_t cmd_setfwd_retry_mode = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + mode, + "" /* defined at init */); +cmdline_parse_token_string_t cmd_setfwd_retry_retry = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + retry, "retry"); + +cmdline_parse_inst_t cmd_set_fwd_retry_mode = { + .f = cmd_set_fwd_retry_mode_parsed, + .data = NULL, + .help_str = NULL, /* defined at init */ + .tokens = { + (void *)&cmd_setfwd_retry_set, + (void *)&cmd_setfwd_retry_fwd, + 
(void *)&cmd_setfwd_retry_mode, + (void *)&cmd_setfwd_retry_retry, + NULL, + }, +}; + +static void cmd_set_fwd_retry_mode_init(void) +{ + char *modes, *c; + static char token[128]; + static char help[256]; + cmdline_parse_token_string_t *token_struct; + + modes = list_pkt_forwarding_retry_modes(); + snprintf(help, sizeof(help), "set fwd %s retry - " + "set packet forwarding mode with retry", modes); + cmd_set_fwd_retry_mode.help_str = help; + + /* string token separator is # */ + for (c = token; *modes != '\0'; modes++) + if (*modes == '|') + *c++ = '#'; + else + *c++ = *modes; + token_struct = (cmdline_parse_token_string_t *) + cmd_set_fwd_retry_mode.tokens[2]; + token_struct->string_data.str = token; +} + /* *** SET BURST TX DELAY TIME RETRY NUMBER *** */ struct cmd_set_burst_tx_retry_result { cmdline_fixed_string_t set; @@ -10408,6 +10477,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_fwd_list, (cmdline_parse_inst_t *)&cmd_set_fwd_mask, (cmdline_parse_inst_t *)&cmd_set_fwd_mode, + (cmdline_parse_inst_t *)&cmd_set_fwd_retry_mode, (cmdline_parse_inst_t *)&cmd_set_burst_tx_retry, (cmdline_parse_inst_t *)&cmd_set_promisc_mode_one, (cmdline_parse_inst_t *)&cmd_set_promisc_mode_all, @@ -10546,6 +10616,7 @@ prompt(void) { /* initialize non-constant commands */ cmd_set_fwd_mode_init(); + cmd_set_fwd_retry_mode_init(); testpmd_cl = cmdline_stdin_new(main_ctx, "testpmd> "); if (testpmd_cl == NULL) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 1c552e4..c611649 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -893,8 +893,9 @@ fwd_lcores_config_display(void) void rxtx_config_display(void) { - printf(" %s packet forwarding - CRC stripping %s - " + printf(" %s packet forwarding%s - CRC stripping %s - " "packets/burst=%d\n", cur_fwd_eng->fwd_mode_name, + retry_enabled == 0 ? "" : " with retry", rx_mode.hw_strip_crc ? 
"enabled" : "disabled", nb_pkt_per_burst); @@ -1131,6 +1132,7 @@ simple_fwd_config_setup(void) fwd_streams[i]->tx_port = fwd_ports_ids[j]; fwd_streams[i]->tx_queue = 0; fwd_streams[i]->peer_addr = j; + fwd_streams[i]->retry_enabled = retry_enabled; if (port_topology == PORT_TOPOLOGY_PAIRED) { fwd_streams[j]->rx_port = fwd_ports_ids[j]; @@ -1138,6 +1140,7 @@ simple_fwd_config_setup(void) fwd_streams[j]->tx_port = fwd_ports_ids[i]; fwd_streams[j]->tx_queue = 0; fwd_streams[j]->peer_addr = i; + fwd_streams[j]->retry_enabled = retry_enabled; } } } @@ -1206,6 +1209,7 @@ rss_fwd_config_setup(void) fs->tx_port = fwd_ports_ids[txp]; fs->tx_queue = rxq; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; rxq = (queueid_t) (rxq + 1); if (rxq < nb_q) continue; @@ -1280,6 +1284,7 @@ dcb_fwd_config_setup(void) fs->tx_port = fwd_ports_ids[txp]; fs->tx_queue = txq + j % nb_tx_queue; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; } fwd_lcores[lc_id]->stream_nb += rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue; @@ -1350,6 +1355,7 @@ icmp_echo_config_setup(void) fs->tx_port = fs->rx_port; fs->tx_queue = rxq; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; if (verbose_level > 0) printf(" stream=%d port=%d rxq=%d txq=%d\n", sm_id, fs->rx_port, fs->rx_queue, @@ -1388,14 +1394,15 @@ pkt_fwd_config_display(struct fwd_config *cfg) lcoreid_t lc_id; streamid_t sm_id; - printf("%s packet forwarding - ports=%d - cores=%d - streams=%d - " + printf("%s packet forwarding%s - ports=%d - cores=%d - streams=%d - " "NUMA support %s, MP over anonymous pages %s\n", cfg->fwd_eng->fwd_mode_name, + retry_enabled == 0 ? "" : " with retry", cfg->nb_fwd_ports, cfg->nb_fwd_lcores, cfg->nb_fwd_streams, numa_support == 1 ? "enabled" : "disabled", mp_anon != 0 ? 
"enabled" : "disabled"); - if (strcmp(cfg->fwd_eng->fwd_mode_name, "mac_retry") == 0) + if (retry_enabled) printf("TX retry num: %u, delay between TX retries: %uus\n", burst_tx_retry_num, burst_tx_delay_time); for (lc_id = 0; lc_id < cfg->nb_fwd_lcores; lc_id++) { @@ -1684,17 +1691,47 @@ list_pkt_forwarding_modes(void) return fwd_modes; } +char* +list_pkt_forwarding_retry_modes(void) +{ + static char fwd_modes[128] = ""; + const char *separator = "|"; + struct fwd_engine *fwd_eng; + unsigned i = 0; + + if (strlen(fwd_modes) == 0) { + while ((fwd_eng = fwd_engines[i++]) != NULL) { + if (fwd_eng == &rx_only_engine) + continue; + strncat(fwd_modes, fwd_eng->fwd_mode_name, + sizeof(fwd_modes) - + strlen(fwd_modes) - 1); + strncat(fwd_modes, separator, + sizeof(fwd_modes) - + strlen(fwd_modes) - 1); + } + fwd_modes[strlen(fwd_modes) - strlen(separator)] = '\0'; + } + + return fwd_modes; +} + void set_pkt_forwarding_mode(const char *fwd_mode_name) { struct fwd_engine *fwd_eng; unsigned i; + if (test_done == 0) { + printf("Please stop forwarding first\n"); + return; + } i = 0; while ((fwd_eng = fwd_engines[i]) != NULL) { if (! strcmp(fwd_eng->fwd_mode_name, fwd_mode_name)) { - printf("Set %s packet forwarding mode\n", - fwd_mode_name); + printf("Set %s packet forwarding mode%s\n", + fwd_mode_name, + retry_enabled == 0 ? 
"" : " with retry"); cur_fwd_eng = fwd_eng; return; } diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c index 7e4f662..4a9be62 100644 --- a/app/test-pmd/csumonly.c +++ b/app/test-pmd/csumonly.c @@ -643,6 +643,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) uint16_t i; uint64_t ol_flags; uint16_t testpmd_ol_flags; + uint32_t retry; uint32_t rx_bad_ip_csum; uint32_t rx_bad_l4_csum; struct testpmd_offload_info info; @@ -845,6 +846,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) } } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; fs->rx_bad_ip_csum += rx_bad_ip_csum; fs->rx_bad_l4_csum += rx_bad_l4_csum; diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index 18b754b..a6abe91 100644 --- a/app/test-pmd/flowgen.c +++ b/app/test-pmd/flowgen.c @@ -131,6 +131,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs) uint16_t nb_tx; uint16_t nb_pkt; uint16_t i; + uint32_t retry; #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; uint64_t end_tsc; @@ -207,6 +208,17 @@ pkt_burst_flow_gen(struct fwd_stream *fs) } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c index e510f9b..3daf55a 100644 --- a/app/test-pmd/icmpecho.c +++ b/app/test-pmd/icmpecho.c @@ -311,6 +311,7 
@@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) struct ipv4_hdr *ip_h; struct icmp_hdr *icmp_h; struct ether_addr eth_addr; + uint32_t retry; uint32_t ip_addr; uint16_t nb_rx; uint16_t nb_tx; @@ -515,6 +516,20 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) if (nb_replies > 0) { nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_replies); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_replies) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_replies && + retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, + fs->tx_queue, + &pkts_burst[nb_tx], + nb_replies - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c index 8840d86..7b6033a 100644 --- a/app/test-pmd/iofwd.c +++ b/app/test-pmd/iofwd.c @@ -80,6 +80,8 @@ pkt_burst_io_forward(struct fwd_stream *fs) struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; uint16_t nb_rx; uint16_t nb_tx; + uint32_t retry; + #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; uint64_t end_tsc; @@ -93,16 +95,28 @@ pkt_burst_io_forward(struct fwd_stream *fs) /* * Receive a burst of packets and forward them. 
*/ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); + nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, + pkts_burst, nb_pkt_per_burst); if (unlikely(nb_rx == 0)) return; + fs->rx_packets += nb_rx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->rx_burst_stats.pkt_burst_spread[nb_rx]++; #endif - fs->rx_packets += nb_rx; - nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/macfwd-retry.c b/app/test-pmd/macfwd-retry.c deleted file mode 100644 index 3a96b3d..0000000 --- a/app/test-pmd/macfwd-retry.c +++ /dev/null @@ -1,164 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include <stdarg.h> -#include <string.h> -#include <stdio.h> -#include <errno.h> -#include <stdint.h> -#include <unistd.h> -#include <inttypes.h> - -#include <sys/queue.h> -#include <sys/stat.h> - -#include <rte_common.h> -#include <rte_byteorder.h> -#include <rte_log.h> -#include <rte_debug.h> -#include <rte_cycles.h> -#include <rte_memory.h> -#include <rte_memcpy.h> -#include <rte_memzone.h> -#include <rte_launch.h> -#include <rte_eal.h> -#include <rte_per_lcore.h> -#include <rte_lcore.h> -#include <rte_atomic.h> -#include <rte_branch_prediction.h> -#include <rte_ring.h> -#include <rte_memory.h> -#include <rte_mempool.h> -#include <rte_mbuf.h> -#include <rte_interrupts.h> -#include <rte_pci.h> -#include <rte_ether.h> -#include <rte_ethdev.h> -#include <rte_ip.h> -#include <rte_string_fns.h> - -#include "testpmd.h" - -#define BURST_TX_WAIT_US 10 -#define BURST_TX_RETRIES 5 - -/* - * Global variables that control number of retires and - * timeout (in us) between retires. - */ -uint32_t burst_tx_delay_time = BURST_TX_WAIT_US; -uint32_t burst_tx_retry_num = BURST_TX_RETRIES; - -/* - * Forwarding of packets in MAC mode with a wait and retry on TX to reduce packet loss. 
- * Change the source and the destination Ethernet addressed of packets - * before forwarding them. - */ -static void -pkt_burst_mac_retry_forward(struct fwd_stream *fs) -{ - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - struct rte_mbuf *mb; - struct ether_hdr *eth_hdr; - uint32_t retry; - uint16_t nb_rx; - uint16_t nb_tx; - uint16_t i; -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - uint64_t start_tsc; - uint64_t end_tsc; - uint64_t core_cycles; -#endif - -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - start_tsc = rte_rdtsc(); -#endif - - /* - * Receive a burst of packets and forward them. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - if (unlikely(nb_rx == 0)) - return; - -#ifdef RTE_TEST_PMD_RECORD_BURST_STATS - fs->rx_burst_stats.pkt_burst_spread[nb_rx]++; -#endif - fs->rx_packets += nb_rx; - for (i = 0; i < nb_rx; i++) { - mb = pkts_burst[i]; - eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *); - ether_addr_copy(&peer_eth_addrs[fs->peer_addr], - ð_hdr->d_addr); - ether_addr_copy(&ports[fs->tx_port].eth_addr, - ð_hdr->s_addr); - } - nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); - - /* - * If not all packets have been TX'd then wait and retry. 
- */ - if (unlikely(nb_tx < nb_rx)) { - for (retry = 0; retry < burst_tx_retry_num; retry++) { - rte_delay_us(burst_tx_delay_time); - nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, - &pkts_burst[nb_tx], nb_rx - nb_tx); - if (nb_tx == nb_rx) - break; - } - } - - fs->tx_packets += nb_tx; -#ifdef RTE_TEST_PMD_RECORD_BURST_STATS - fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; -#endif - if (unlikely(nb_tx < nb_rx)) { - fs->fwd_dropped += (nb_rx - nb_tx); - do { - rte_pktmbuf_free(pkts_burst[nb_tx]); - } while (++nb_tx < nb_rx); - } -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - end_tsc = rte_rdtsc(); - core_cycles = (end_tsc - start_tsc); - fs->core_cycles = (uint64_t) (fs->core_cycles + core_cycles); -#endif -} - -struct fwd_engine mac_retry_fwd_engine = { - .fwd_mode_name = "mac_retry", - .port_fwd_begin = NULL, - .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_retry_forward, -}; diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index 3b7fffb..15c6e5d 100644 --- a/app/test-pmd/macfwd.c +++ b/app/test-pmd/macfwd.c @@ -81,6 +81,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs) struct rte_port *txp; struct rte_mbuf *mb; struct ether_hdr *eth_hdr; + uint32_t retry; uint16_t nb_rx; uint16_t nb_tx; uint16_t i; @@ -126,6 +127,18 @@ pkt_burst_mac_forward(struct fwd_stream *fs) mb->vlan_tci_outer = txp->tx_vlan_id_outer; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } + fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index 154889d..837e634 100644 --- a/app/test-pmd/macswap.c +++ b/app/test-pmd/macswap.c @@ -84,6 +84,7 @@ 
pkt_burst_mac_swap(struct fwd_stream *fs) uint16_t nb_rx; uint16_t nb_tx; uint16_t i; + uint32_t retry; uint64_t ol_flags = 0; #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; @@ -128,6 +129,17 @@ pkt_burst_mac_swap(struct fwd_stream *fs) mb->vlan_tci_outer = txp->tx_vlan_id_outer; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 9d11830..7ab67b8 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -143,7 +143,6 @@ streamid_t nb_fwd_streams; /**< Is equal to (nb_ports * nb_rxq). */ struct fwd_engine * fwd_engines[] = { &io_fwd_engine, &mac_fwd_engine, - &mac_retry_fwd_engine, &mac_swap_engine, &flow_gen_engine, &rx_only_engine, @@ -158,6 +157,9 @@ struct fwd_engine * fwd_engines[] = { struct fwd_config cur_fwd_config; struct fwd_engine *cur_fwd_eng = &io_fwd_engine; /**< IO mode by default. */ +uint32_t retry_enabled; +uint32_t burst_tx_delay_time = BURST_TX_WAIT_US; +uint32_t burst_tx_retry_num = BURST_TX_RETRIES; uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. 
*/ uint32_t param_total_num_mbufs = 0; /**< number of mbufs in all pools - if diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 0f72ca1..62ec055 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -103,6 +103,8 @@ struct fwd_stream { queueid_t tx_queue; /**< TX queue to send forwarded packets */ streamid_t peer_addr; /**< index of peer ethernet address of packets */ + unsigned int retry_enabled; + /* "read-write" results */ unsigned int rx_packets; /**< received packets */ unsigned int tx_packets; /**< received packets transmitted */ @@ -220,9 +222,14 @@ struct fwd_engine { packet_fwd_t packet_fwd; /**< Mandatory. */ }; +#define BURST_TX_WAIT_US 1 +#define BURST_TX_RETRIES 64 + +extern uint32_t burst_tx_delay_time; +extern uint32_t burst_tx_retry_num; + extern struct fwd_engine io_fwd_engine; extern struct fwd_engine mac_fwd_engine; -extern struct fwd_engine mac_retry_fwd_engine; extern struct fwd_engine mac_swap_engine; extern struct fwd_engine flow_gen_engine; extern struct fwd_engine rx_only_engine; @@ -380,6 +387,7 @@ extern int8_t tx_wthresh; extern struct fwd_config cur_fwd_config; extern struct fwd_engine *cur_fwd_eng; +extern uint32_t retry_enabled; extern struct fwd_lcore **fwd_lcores; extern struct fwd_stream **fwd_streams; @@ -523,6 +531,7 @@ void show_tx_pkt_segments(void); void set_tx_pkt_split(const char *name); void set_nb_pkt_per_burst(uint16_t pkt_burst); char *list_pkt_forwarding_modes(void); +char *list_pkt_forwarding_retry_modes(void); void set_pkt_forwarding_mode(const char *fwd_mode); void start_packet_forwarding(int with_tx_first); void stop_packet_forwarding(void); diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index 0ac2a08..11fd681 100644 --- a/app/test-pmd/txonly.c +++ b/app/test-pmd/txonly.c @@ -193,6 +193,7 @@ pkt_burst_transmit(struct fwd_stream *fs) uint16_t nb_tx; uint16_t nb_pkt; uint16_t vlan_tci, vlan_tci_outer; + uint32_t retry; uint64_t ol_flags = 0; uint8_t i; #ifdef 
RTE_TEST_PMD_RECORD_CORE_CYCLES @@ -273,6 +274,17 @@ pkt_burst_transmit(struct fwd_stream *fs) pkts_burst[nb_pkt] = pkt; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_pkt) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_pkt && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_pkt - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index f605564..b22ee07 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -329,7 +329,6 @@ The commandline options are: io (the default) mac - mac_retry mac_swap flowgen rxonly diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index aed5e47..03412db 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -249,8 +249,10 @@ set fwd Set the packet forwarding mode:: - testpmd> set fwd (io|mac|mac_retry|macswap|flowgen| \ - rxonly|txonly|csum|icmpecho) + testpmd> set fwd (io|mac|macswap|flowgen| \ + rxonly|txonly|csum|icmpecho) (""|retry) + +``retry`` can be specified for forwarding engines except ``rx_only``. The available information categories are: @@ -260,8 +262,6 @@ The available information categories are: * ``mac``: Changes the source and the destination Ethernet addresses of packets before forwarding them. -* ``mac_retry``: Same as "mac" forwarding mode, but includes retries if the destination queue is full. - * ``macswap``: MAC swap forwarding mode. Swaps the source and the destination Ethernet addresses of packets before forwarding them. @@ -392,7 +392,7 @@ Set number of packets per burst:: This is equivalent to the ``--burst command-line`` option. 
-In ``mac_retry`` forwarding mode, the transmit delay time and number of retries can also be set:: +When retry is enabled, the transmit delay time and number of retries can also be set:: testpmd> set burst tx delay (micrseconds) retry (num) -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang @ 2016-06-07 9:28 ` De Lara Guarch, Pablo 2016-06-08 1:29 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-07 9:28 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 1/5] testpmd: add retry option > > This patch adds retry option in testpmd to prevent most packet losses. > It can be enabled by "set fwd <mode> retry". All modes except rxonly > support this option. > > Adding retry mechanism expands test case coverage to support scenarios > where packet loss affects test results. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> ... > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst > @@ -249,8 +249,10 @@ set fwd > > Set the packet forwarding mode:: > > - testpmd> set fwd (io|mac|mac_retry|macswap|flowgen| \ > - rxonly|txonly|csum|icmpecho) > + testpmd> set fwd (io|mac|macswap|flowgen| \ > + rxonly|txonly|csum|icmpecho) (""|retry) > + > +``retry`` can be specified for forwarding engines except ``rx_only``. > > The available information categories are: > > @@ -260,8 +262,6 @@ The available information categories are: > > * ``mac``: Changes the source and the destination Ethernet addresses of > packets before forwarding them. > > -* ``mac_retry``: Same as "mac" forwarding mode, but includes retries if the > destination queue is full. > - > * ``macswap``: MAC swap forwarding mode. > Swaps the source and the destination Ethernet addresses of packets before > forwarding them. 
> > @@ -392,7 +392,7 @@ Set number of packets per burst:: > > This is equivalent to the ``--burst command-line`` option. > > -In ``mac_retry`` forwarding mode, the transmit delay time and number of > retries can also be set:: > +When retry is enabled, the transmit delay time and number of retries can > also be set:: > > testpmd> set burst tx delay (micrseconds) retry (num) Could you fix the typo "micrseconds" in this patch? > > -- > 2.5.0 Apart from this, Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option 2016-06-07 9:28 ` De Lara Guarch, Pablo @ 2016-06-08 1:29 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-08 1:29 UTC (permalink / raw) To: De Lara Guarch, Pablo, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: De Lara Guarch, Pablo > Sent: Tuesday, June 7, 2016 5:28 PM > To: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: RE: [PATCH v2 1/5] testpmd: add retry option > > > > > -----Original Message----- > > From: Wang, Zhihong > > Sent: Wednesday, June 01, 2016 4:28 AM > > To: dev@dpdk.org > > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > > thomas.monjalon@6wind.com; Wang, Zhihong > > Subject: [PATCH v2 1/5] testpmd: add retry option > > > > This patch adds retry option in testpmd to prevent most packet losses. > > It can be enabled by "set fwd <mode> retry". All modes except rxonly > > support this option. > > > > Adding retry mechanism expands test case coverage to support scenarios > > where packet loss affects test results. > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > ... > > > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst > > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst > > @@ -249,8 +249,10 @@ set fwd > > > > Set the packet forwarding mode:: > > > > - testpmd> set fwd (io|mac|mac_retry|macswap|flowgen| \ > > - rxonly|txonly|csum|icmpecho) > > + testpmd> set fwd (io|mac|macswap|flowgen| \ > > + rxonly|txonly|csum|icmpecho) (""|retry) > > + > > +``retry`` can be specified for forwarding engines except ``rx_only``. 
> > > > The available information categories are: > > > > @@ -260,8 +262,6 @@ The available information categories are: > > > > * ``mac``: Changes the source and the destination Ethernet addresses of > > packets before forwarding them. > > > > -* ``mac_retry``: Same as "mac" forwarding mode, but includes retries if the > > destination queue is full. > > - > > * ``macswap``: MAC swap forwarding mode. > > Swaps the source and the destination Ethernet addresses of packets > before > > forwarding them. > > > > @@ -392,7 +392,7 @@ Set number of packets per burst:: > > > > This is equivalent to the ``--burst command-line`` option. > > > > -In ``mac_retry`` forwarding mode, the transmit delay time and number of > > retries can also be set:: > > +When retry is enabled, the transmit delay time and number of retries can > > also be set:: > > > > testpmd> set burst tx delay (micrseconds) retry (num) > > Could you fix the typo "micrseconds" in this patch? Sure ;) > > > > > -- > > 2.5.0 > > Apart from this, > > Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-07 9:43 ` De Lara Guarch, Pablo 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats Zhihong Wang ` (2 subsequent siblings) 4 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch enables configurable tx_first burst number. Use "start tx_first (burst_num)" to specify how many bursts of packets to be sent before forwarding start, or "start tx_first" like before for the default 1 burst send. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/cmdline.c | 41 +++++++++++++++++++++++++++++ app/test-pmd/testpmd.c | 7 +++-- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 6 +++-- 3 files changed, 50 insertions(+), 4 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 0af3f05..ef66d4e 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -5309,6 +5309,46 @@ cmdline_parse_inst_t cmd_start_tx_first = { }, }; +/* *** START FORWARDING WITH N TX BURST FIRST *** */ +struct cmd_start_tx_first_n_result { + cmdline_fixed_string_t start; + cmdline_fixed_string_t tx_first; + uint32_t tx_num; +}; + +static void +cmd_start_tx_first_n_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_start_tx_first_n_result *res = parsed_result; + + start_packet_forwarding(res->tx_num); +} + +cmdline_parse_token_string_t cmd_start_tx_first_n_start = + TOKEN_STRING_INITIALIZER(struct cmd_start_tx_first_n_result, + start, "start"); +cmdline_parse_token_string_t cmd_start_tx_first_n_tx_first = + TOKEN_STRING_INITIALIZER(struct 
cmd_start_tx_first_n_result, + tx_first, "tx_first"); +cmdline_parse_token_num_t cmd_start_tx_first_n_tx_num = + TOKEN_NUM_INITIALIZER(struct cmd_start_tx_first_n_result, + tx_num, UINT32); + +cmdline_parse_inst_t cmd_start_tx_first_n = { + .f = cmd_start_tx_first_n_parsed, + .data = NULL, + .help_str = "start packet forwarding, after sending <num> " + "bursts of packets", + .tokens = { + (void *)&cmd_start_tx_first_n_start, + (void *)&cmd_start_tx_first_n_tx_first, + (void *)&cmd_start_tx_first_n_tx_num, + NULL, + }, +}; + /* *** SET LINK UP *** */ struct cmd_set_link_up_result { cmdline_fixed_string_t set; @@ -10468,6 +10508,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_showcfg, (cmdline_parse_inst_t *)&cmd_start, (cmdline_parse_inst_t *)&cmd_start_tx_first, + (cmdline_parse_inst_t *)&cmd_start_tx_first_n, (cmdline_parse_inst_t *)&cmd_set_link_up, (cmdline_parse_inst_t *)&cmd_set_link_down, (cmdline_parse_inst_t *)&cmd_reset, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 7ab67b8..9b1d99c 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1043,8 +1043,11 @@ start_packet_forwarding(int with_tx_first) for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) (*port_fwd_begin)(fwd_ports_ids[i]); } - launch_packet_forwarding(run_one_txonly_burst_on_core); - rte_eal_mp_wait_lcore(); + while (with_tx_first--) { + launch_packet_forwarding( + run_one_txonly_burst_on_core); + rte_eal_mp_wait_lcore(); + } port_fwd_end = tx_only_engine.port_fwd_end; if (port_fwd_end != NULL) { for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 03412db..ff94593 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -98,9 +98,11 @@ Start packet forwarding with current configuration:: start tx_first ~~~~~~~~~~~~~~ -Start packet forwarding with current configuration after sending 
one burst of packets:: +Start packet forwarding with current configuration after sending specified number of bursts of packets:: - testpmd> start tx_first + testpmd> start tx_first (""|burst_num) + +The default burst number is 1 when ``burst_num`` not presented. stop ~~~~ -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number Zhihong Wang @ 2016-06-07 9:43 ` De Lara Guarch, Pablo 0 siblings, 0 replies; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-07 9:43 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 2/5] testpmd: configurable tx_first burst number > > This patch enables configurable tx_first burst number. > > Use "start tx_first (burst_num)" to specify how many bursts of packets to > be sent before forwarding start, or "start tx_first" like before for the > default 1 burst send. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> I agree with Zhihong. This patch is simple enough and does the job. Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number Zhihong Wang @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-07 10:02 ` De Lara Guarch, Pablo 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang 4 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch adds throughput numbers (in the period since last use of this command) in port statistics display for "show port stats (port_id|all)". Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/config.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index c611649..f487b87 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -92,6 +92,7 @@ #include <rte_ether.h> #include <rte_ethdev.h> #include <rte_string_fns.h> +#include <rte_cycles.h> #include "testpmd.h" @@ -150,6 +151,10 @@ print_ethaddr(const char *name, struct ether_addr *eth_addr) void nic_stats_display(portid_t port_id) { + static uint64_t sum_rx[RTE_MAX_ETHPORTS]; + static uint64_t sum_tx[RTE_MAX_ETHPORTS]; + static uint64_t cycles[RTE_MAX_ETHPORTS]; + uint64_t pkt_rx, pkt_tx, cycle; struct rte_eth_stats stats; struct rte_port *port = &ports[port_id]; uint8_t i; @@ -209,6 +214,21 @@ nic_stats_display(portid_t port_id) } } + cycle = cycles[port_id]; + cycles[port_id] = rte_rdtsc(); + if (cycle > 0) + cycle = cycles[port_id] - cycle; + + pkt_rx = stats.ipackets - sum_rx[port_id]; + pkt_tx = stats.opackets - sum_tx[port_id]; + 
sum_rx[port_id] = stats.ipackets; + sum_tx[port_id] = stats.opackets; + printf("\n Throughput (since last show)\n"); + printf(" RX-pps: %12"PRIu64"\n" + " TX-pps: %12"PRIu64"\n", + cycle > 0 ? pkt_rx * rte_get_tsc_hz() / cycle : 0, + cycle > 0 ? pkt_tx * rte_get_tsc_hz() / cycle : 0); + printf(" %s############################%s\n", nic_stats_border, nic_stats_border); } -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats Zhihong Wang @ 2016-06-07 10:02 ` De Lara Guarch, Pablo 2016-06-08 1:31 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-07 10:02 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 3/5] testpmd: show throughput in port stats > > This patch adds throughput numbers (in the period since last use of this > command) in port statistics display for "show port stats (port_id|all)". > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > --- > app/test-pmd/config.c | 20 ++++++++++++++++++++ > 1 file changed, 20 insertions(+) > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > index c611649..f487b87 100644 > --- a/app/test-pmd/config.c > +++ b/app/test-pmd/config.c > @@ -92,6 +92,7 @@ > #include <rte_ether.h> > #include <rte_ethdev.h> > #include <rte_string_fns.h> > +#include <rte_cycles.h> > > #include "testpmd.h" > > @@ -150,6 +151,10 @@ print_ethaddr(const char *name, struct ether_addr > *eth_addr) > void > nic_stats_display(portid_t port_id) > { > + static uint64_t sum_rx[RTE_MAX_ETHPORTS]; > + static uint64_t sum_tx[RTE_MAX_ETHPORTS]; > + static uint64_t cycles[RTE_MAX_ETHPORTS]; > + uint64_t pkt_rx, pkt_tx, cycle; Could you rename some of these variables to something more specific? 
Like: pkt_rx -> diff_rx_pkts sum_rx -> prev_rx_pkts cycle -> diff_cycles cycles -> prev_cycles > struct rte_eth_stats stats; > struct rte_port *port = &ports[port_id]; > uint8_t i; > @@ -209,6 +214,21 @@ nic_stats_display(portid_t port_id) > } > } > > + cycle = cycles[port_id]; > + cycles[port_id] = rte_rdtsc(); > + if (cycle > 0) > + cycle = cycles[port_id] - cycle; > + > + pkt_rx = stats.ipackets - sum_rx[port_id]; > + pkt_tx = stats.opackets - sum_tx[port_id]; > + sum_rx[port_id] = stats.ipackets; > + sum_tx[port_id] = stats.opackets; > + printf("\n Throughput (since last show)\n"); > + printf(" RX-pps: %12"PRIu64"\n" > + " TX-pps: %12"PRIu64"\n", > + cycle > 0 ? pkt_rx * rte_get_tsc_hz() / cycle : 0, > + cycle > 0 ? pkt_tx * rte_get_tsc_hz() / cycle : 0); > + > printf(" %s############################%s\n", > nic_stats_border, nic_stats_border); > } > -- > 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats 2016-06-07 10:02 ` De Lara Guarch, Pablo @ 2016-06-08 1:31 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-08 1:31 UTC (permalink / raw) To: De Lara Guarch, Pablo, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: De Lara Guarch, Pablo > Sent: Tuesday, June 7, 2016 6:03 PM > To: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: RE: [PATCH v2 3/5] testpmd: show throughput in port stats > > > > > -----Original Message----- > > From: Wang, Zhihong > > Sent: Wednesday, June 01, 2016 4:28 AM > > To: dev@dpdk.org > > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > > thomas.monjalon@6wind.com; Wang, Zhihong > > Subject: [PATCH v2 3/5] testpmd: show throughput in port stats > > > > This patch adds throughput numbers (in the period since last use of this > > command) in port statistics display for "show port stats (port_id|all)". 
> > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > --- > > app/test-pmd/config.c | 20 ++++++++++++++++++++ > > 1 file changed, 20 insertions(+) > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > > index c611649..f487b87 100644 > > --- a/app/test-pmd/config.c > > +++ b/app/test-pmd/config.c > > @@ -92,6 +92,7 @@ > > #include <rte_ether.h> > > #include <rte_ethdev.h> > > #include <rte_string_fns.h> > > +#include <rte_cycles.h> > > > > #include "testpmd.h" > > > > @@ -150,6 +151,10 @@ print_ethaddr(const char *name, struct ether_addr > > *eth_addr) > > void > > nic_stats_display(portid_t port_id) > > { > > + static uint64_t sum_rx[RTE_MAX_ETHPORTS]; > > + static uint64_t sum_tx[RTE_MAX_ETHPORTS]; > > + static uint64_t cycles[RTE_MAX_ETHPORTS]; > > + uint64_t pkt_rx, pkt_tx, cycle; > > Could you rename some of these variables to something more specific? Thanks for the suggestion! Will rename them. > Like: > pkt_rx -> diff_rx_pkts > sum_rx -> prev_rx_pkts > cycle -> diff_cycles > cycles -> prev_cycles > > > > > struct rte_eth_stats stats; > > struct rte_port *port = &ports[port_id]; > > uint8_t i; > > @@ -209,6 +214,21 @@ nic_stats_display(portid_t port_id) > > } > > } > > > > + cycle = cycles[port_id]; > > + cycles[port_id] = rte_rdtsc(); > > + if (cycle > 0) > > + cycle = cycles[port_id] - cycle; > > + > > + pkt_rx = stats.ipackets - sum_rx[port_id]; > > + pkt_tx = stats.opackets - sum_tx[port_id]; > > + sum_rx[port_id] = stats.ipackets; > > + sum_tx[port_id] = stats.opackets; > > + printf("\n Throughput (since last show)\n"); > > + printf(" RX-pps: %12"PRIu64"\n" > > + " TX-pps: %12"PRIu64"\n", > > + cycle > 0 ? pkt_rx * rte_get_tsc_hz() / cycle : 0, > > + cycle > 0 ? pkt_tx * rte_get_tsc_hz() / cycle : 0); > > + > > printf(" %s############################%s\n", > > nic_stats_border, nic_stats_border); > > } > > -- > > 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang ` (2 preceding siblings ...) 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats Zhihong Wang @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-07 10:29 ` De Lara Guarch, Pablo 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang 4 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch removes constraints in rxq handling when multiqueue is enabled to handle all the rxqs. Current testpmd forces a dedicated core for each rxq, some rxqs may be ignored when core number is less than rxq number, and that causes confusion and inconvenience. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/config.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index f487b87..cfdacd8 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1196,19 +1196,13 @@ rss_fwd_config_setup(void) cur_fwd_config.nb_fwd_ports = nb_fwd_ports; cur_fwd_config.nb_fwd_streams = (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) - cur_fwd_config.nb_fwd_streams = - (streamid_t)cur_fwd_config.nb_fwd_lcores; - else - cur_fwd_config.nb_fwd_lcores = - (lcoreid_t)cur_fwd_config.nb_fwd_streams; /* reinitialize forwarding streams */ init_fwd_streams(); setup_fwd_config_of_each_lcore(&cur_fwd_config); rxp = 0; rxq = 0; - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { struct fwd_stream *fs; fs = fwd_streams[lc_id]; -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang @ 2016-06-07 10:29 ` De Lara Guarch, Pablo 2016-06-08 1:28 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-07 10:29 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 4/5] testpmd: handle all rxqs in rss setup > > This patch removes constraints in rxq handling when multiqueue is enabled > to handle all the rxqs. > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > ignored when core number is less than rxq number, and that causes > confusion > and inconvenience. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> Patch looks good, but you said that you were going to add a more detailed description in the commit message. Thanks, Pablo ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup 2016-06-07 10:29 ` De Lara Guarch, Pablo @ 2016-06-08 1:28 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-08 1:28 UTC (permalink / raw) To: De Lara Guarch, Pablo, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: De Lara Guarch, Pablo > Sent: Tuesday, June 7, 2016 6:30 PM > To: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: RE: [PATCH v2 4/5] testpmd: handle all rxqs in rss setup > > > > > -----Original Message----- > > From: Wang, Zhihong > > Sent: Wednesday, June 01, 2016 4:28 AM > > To: dev@dpdk.org > > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > > thomas.monjalon@6wind.com; Wang, Zhihong > > Subject: [PATCH v2 4/5] testpmd: handle all rxqs in rss setup > > > > This patch removes constraints in rxq handling when multiqueue is enabled > > to handle all the rxqs. > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > ignored when core number is less than rxq number, and that causes > > confusion > > and inconvenience. > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > Patch looks good, but you said that you were going to add a more detailed > description in the commit message. I added them in the cover letter. Will add them here too. > > Thanks, > Pablo ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang ` (3 preceding siblings ...) 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang @ 2016-06-01 3:27 ` Zhihong Wang 2016-06-07 10:56 ` De Lara Guarch, Pablo 2016-06-14 15:13 ` De Lara Guarch, Pablo 4 siblings, 2 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-01 3:27 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch show topology at forwarding start. "show config fwd" also does this, but showing it directly can reduce the possibility of misconfiguration. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/cmdline.c | 2 +- app/test-pmd/config.c | 4 ++-- app/test-pmd/testpmd.c | 2 +- app/test-pmd/testpmd.h | 3 +-- 4 files changed, 5 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index ef66d4e..bc800f8 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -5445,7 +5445,7 @@ static void cmd_showcfg_parsed(void *parsed_result, else if (!strcmp(res->what, "cores")) fwd_lcores_config_display(); else if (!strcmp(res->what, "fwd")) - fwd_config_display(); + fwd_config_setup_display(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); } diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index cfdacd8..c70f308 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1383,7 +1383,7 @@ icmp_echo_config_setup(void) } } -void +static void fwd_config_setup(void) { cur_fwd_config.fwd_eng = cur_fwd_eng; @@ -1443,7 +1443,7 @@ pkt_fwd_config_display(struct fwd_config *cfg) void -fwd_config_display(void) +fwd_config_setup_display(void) { fwd_config_setup(); pkt_fwd_config_display(&cur_fwd_config); diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 9b1d99c..b946034 100644 --- a/app/test-pmd/testpmd.c 
+++ b/app/test-pmd/testpmd.c @@ -1009,7 +1009,7 @@ start_packet_forwarding(int with_tx_first) if(!no_flush_rx) flush_fwd_rx_queues(); - fwd_config_setup(); + fwd_config_setup_display(); rxtx_config_display(); for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 62ec055..5fd08e8 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -480,9 +480,8 @@ void port_infos_display(portid_t port_id); void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id); void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id); void fwd_lcores_config_display(void); -void fwd_config_display(void); +void fwd_config_setup_display(void); void rxtx_config_display(void); -void fwd_config_setup(void); void set_def_fwd_config(void); void reconfig(portid_t new_port_id, unsigned socket_id); int init_fwd_streams(void); -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang @ 2016-06-07 10:56 ` De Lara Guarch, Pablo 2016-06-14 15:13 ` De Lara Guarch, Pablo 1 sibling, 0 replies; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-07 10:56 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 5/5] testpmd: show topology at forwarding start > > This patch show topology at forwarding start. > > "show config fwd" also does this, but showing it directly can reduce the > possibility of misconfiguration. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang 2016-06-07 10:56 ` De Lara Guarch, Pablo @ 2016-06-14 15:13 ` De Lara Guarch, Pablo 2016-06-15 7:05 ` Wang, Zhihong 1 sibling, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-14 15:13 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon Hi Zhihong, > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 01, 2016 4:28 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v2 5/5] testpmd: show topology at forwarding start > > This patch show topology at forwarding start. > > "show config fwd" also does this, but showing it directly can reduce the > possibility of misconfiguration. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> [...] > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c > index 9b1d99c..b946034 100644 > --- a/app/test-pmd/testpmd.c > +++ b/app/test-pmd/testpmd.c > @@ -1009,7 +1009,7 @@ start_packet_forwarding(int with_tx_first) > if(!no_flush_rx) > flush_fwd_rx_queues(); > > - fwd_config_setup(); > + fwd_config_setup_display(); Bernard has made a patch that separates the display and setup of the configuration, (http://dpdk.org/dev/patchwork/patch/13650/) so fwd_config_display() does not call fwd_config_setup() anymore. Could you modify this patch, so you call fwd_config_setup() and fwd_config_display()? Sorry for the confusion, Pablo > rxtx_config_display(); > > for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start 2016-06-14 15:13 ` De Lara Guarch, Pablo @ 2016-06-15 7:05 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-15 7:05 UTC (permalink / raw) To: De Lara Guarch, Pablo, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: De Lara Guarch, Pablo > Sent: Tuesday, June 14, 2016 11:13 PM > To: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: RE: [PATCH v2 5/5] testpmd: show topology at forwarding start > > > Hi Zhihong, > > > -----Original Message----- > > From: Wang, Zhihong > > Sent: Wednesday, June 01, 2016 4:28 AM > > To: dev@dpdk.org > > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > > thomas.monjalon@6wind.com; Wang, Zhihong > > Subject: [PATCH v2 5/5] testpmd: show topology at forwarding start > > > > This patch show topology at forwarding start. > > > > "show config fwd" also does this, but showing it directly can reduce the > > possibility of misconfiguration. > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > [...] > > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c > > index 9b1d99c..b946034 100644 > > --- a/app/test-pmd/testpmd.c > > +++ b/app/test-pmd/testpmd.c > > @@ -1009,7 +1009,7 @@ start_packet_forwarding(int with_tx_first) > > if(!no_flush_rx) > > flush_fwd_rx_queues(); > > > > - fwd_config_setup(); > > + fwd_config_setup_display(); > > Bernard has made a patch that separates the display and setup of the > configuration, > (http://dpdk.org/dev/patchwork/patch/13650/) > so fwd_config_display() does not call fwd_config_setup() anymore. > > Could you modify this patch, so you call fwd_config_setup() and > fwd_config_display()? Thanks for the info! I've updated this patch with a v3. 
Could you please help review? > > Sorry for the confusion, > Pablo > > > rxtx_config_display(); > > > > for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang ` (8 preceding siblings ...) 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option Zhihong Wang ` (5 more replies) 9 siblings, 6 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon This patch enables vhost/virtio pmd performance loopback testing in testpmd. All the features are for general usage. The loopback test focuses on the maximum full-path packet forwarding performance between host and guest; it runs vhost/virtio pmd only, without introducing extra overhead. Therefore, the main requirement is traffic generation, since there are no other packet generators like IXIA to help. In current testpmd, iofwd is the best candidate to perform this loopback test because it's the fastest possible forwarding engine: Start testpmd iofwd in the host with 1 vhost port, and start testpmd iofwd in the connected guest with 1 corresponding virtio port; these 2 ports form a forwarding loop: Host vhost Tx -> Guest virtio Rx -> Guest virtio Tx -> Host vhost Rx. As to traffic generation, "start tx_first" injects a burst of packets into the loop. However, 2 issues remain: 1. If only 1 burst of packets is injected in the loop, there will definitely be empty Rx operations, e.g. when the guest virtio port sends a burst to the host and then starts Rx immediately, it's likely the packets are still being forwarded by the host vhost port and haven't reached the guest yet. We need to fill up the ring to keep all pmds busy. 2. iofwd doesn't provide a retry mechanism, so if packet loss occurs, there won't be a full burst in the loop. To address these issues, this patch: 1.
Adds a retry option in testpmd to prevent most packet losses. 2. Adds a parameter to enable a configurable tx_first burst number. Other related improvements include: 1. Handle all rxqs when multiqueue is enabled: Current testpmd forces a single core for each rxq, which causes inconvenience and confusion. This change doesn't break anything: we can still force a single core for each rxq by making the number of cores equal to the number of rxqs. One example: a Red Hat engineer was doing a multiqueue test with 2 ports in the guest, each with 4 queues, and testpmd as the forwarding engine in the guest. As usual he used 1 core for forwarding, and as a result he only saw traffic from port 0 queue 0 to port 1 queue 0; a lot of emails and quite some time were spent to root-cause it, and of course it was caused by this unreasonable testpmd behavior. Moreover, even if we understand this behavior, if we want to test the above case, we still need 8 cores for a single guest to poll all the rxqs, which is obviously too expensive. We've met quite a lot of cases like this, one recent example: http://openvswitch.org/pipermail/dev/2016-June/072110.html 2. Show topology at forwarding start: "show config fwd" also does this, but showing it directly can reduce the possibility of mis-configuration. Like the case above, if testpmd shows the topology at forwarding start, then probably all those debugging efforts can be saved. 3. Add throughput information in the port statistics display for "show port stats (port_id|all)". Finally there's a documentation update. Example on how to enable the vhost/virtio performance loopback test: 1. Start testpmd in the host with 1 vhost port only. 2. Start testpmd in the guest with only 1 virtio port connected to the corresponding vhost port. 3. "set fwd io retry" in testpmd in both host and guest. 4. "start" in testpmd in the guest. 5. "start tx_first 16" in testpmd in the host. Then use "show port stats all" to monitor the performance. -------------- Changes in v2: 1.
Add retry as an option for existing forwarding engines except rxonly. 2. Minor code adjustment and more detailed patch description. -------------- Changes in v3: 1. Add more details in commit log. 2. Give variables more meaningful names. 3. Fix a typo in existing doc. 4. Rebase the patches. Zhihong Wang (5): testpmd: add retry option testpmd: configurable tx_first burst number testpmd: show throughput in port stats testpmd: handle all rxqs in rss setup testpmd: show topology at forwarding start app/test-pmd/Makefile | 1 - app/test-pmd/cmdline.c | 116 ++++++++++++++++++- app/test-pmd/config.c | 74 ++++++++++-- app/test-pmd/csumonly.c | 12 ++ app/test-pmd/flowgen.c | 12 ++ app/test-pmd/icmpecho.c | 15 +++ app/test-pmd/iofwd.c | 22 +++- app/test-pmd/macfwd-retry.c | 167 ---------------------------- app/test-pmd/macfwd.c | 13 +++ app/test-pmd/macswap.c | 12 ++ app/test-pmd/testpmd.c | 12 +- app/test-pmd/testpmd.h | 11 +- app/test-pmd/txonly.c | 12 ++ doc/guides/testpmd_app_ug/run_app.rst | 1 - doc/guides/testpmd_app_ug/testpmd_funcs.rst | 18 +-- 15 files changed, 299 insertions(+), 199 deletions(-) delete mode 100644 app/test-pmd/macfwd-retry.c -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 2/5] testpmd: configurable tx_first burst number Zhihong Wang ` (4 subsequent siblings) 5 siblings, 0 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch adds retry option in testpmd to prevent most packet losses. It can be enabled by "set fwd <mode> retry". All modes except rxonly support this option. Adding retry mechanism expands test case coverage to support scenarios where packet loss affects test results. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> --- app/test-pmd/Makefile | 1 - app/test-pmd/cmdline.c | 75 ++++++++++++- app/test-pmd/config.c | 43 ++++++- app/test-pmd/csumonly.c | 12 ++ app/test-pmd/flowgen.c | 12 ++ app/test-pmd/icmpecho.c | 15 +++ app/test-pmd/iofwd.c | 22 +++- app/test-pmd/macfwd-retry.c | 167 ---------------------------- app/test-pmd/macfwd.c | 13 +++ app/test-pmd/macswap.c | 12 ++ app/test-pmd/testpmd.c | 4 +- app/test-pmd/testpmd.h | 11 +- app/test-pmd/txonly.c | 12 ++ doc/guides/testpmd_app_ug/run_app.rst | 1 - doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 +- 15 files changed, 224 insertions(+), 188 deletions(-) delete mode 100644 app/test-pmd/macfwd-retry.c diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile index 40039a1..2a0b5a5 100644 --- a/app/test-pmd/Makefile +++ b/app/test-pmd/Makefile @@ -50,7 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += cmdline.c SRCS-y += config.c SRCS-y += iofwd.c SRCS-y += macfwd.c -SRCS-y += macfwd-retry.c SRCS-y += macswap.c SRCS-y += flowgen.c SRCS-y += rxonly.c diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index fd389ac..e414c0f 100644 
--- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -246,8 +246,8 @@ static void cmd_help_long_parsed(void *parsed_result, " Set number of packets per burst.\n\n" "set burst tx delay (microseconds) retry (num)\n" - " Set the transmit delay time and number of retries" - " in mac_retry forwarding mode.\n\n" + " Set the transmit delay time and number of retries," + " effective when retry is enabled.\n\n" "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" @@ -4557,6 +4557,7 @@ static void cmd_set_fwd_mode_parsed(void *parsed_result, { struct cmd_set_fwd_mode_result *res = parsed_result; + retry_enabled = 0; set_pkt_forwarding_mode(res->mode); } @@ -4602,6 +4603,74 @@ static void cmd_set_fwd_mode_init(void) token_struct->string_data.str = token; } +/* *** SET RETRY FORWARDING MODE *** */ +struct cmd_set_fwd_retry_mode_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t fwd; + cmdline_fixed_string_t mode; + cmdline_fixed_string_t retry; +}; + +static void cmd_set_fwd_retry_mode_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_set_fwd_retry_mode_result *res = parsed_result; + + retry_enabled = 1; + set_pkt_forwarding_mode(res->mode); +} + +cmdline_parse_token_string_t cmd_setfwd_retry_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + set, "set"); +cmdline_parse_token_string_t cmd_setfwd_retry_fwd = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + fwd, "fwd"); +cmdline_parse_token_string_t cmd_setfwd_retry_mode = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + mode, + "" /* defined at init */); +cmdline_parse_token_string_t cmd_setfwd_retry_retry = + TOKEN_STRING_INITIALIZER(struct cmd_set_fwd_retry_mode_result, + retry, "retry"); + +cmdline_parse_inst_t cmd_set_fwd_retry_mode = { + .f = cmd_set_fwd_retry_mode_parsed, + .data = NULL, + .help_str = NULL, /* defined at init */ + .tokens = { + (void 
*)&cmd_setfwd_retry_set, + (void *)&cmd_setfwd_retry_fwd, + (void *)&cmd_setfwd_retry_mode, + (void *)&cmd_setfwd_retry_retry, + NULL, + }, +}; + +static void cmd_set_fwd_retry_mode_init(void) +{ + char *modes, *c; + static char token[128]; + static char help[256]; + cmdline_parse_token_string_t *token_struct; + + modes = list_pkt_forwarding_retry_modes(); + snprintf(help, sizeof(help), "set fwd %s retry - " + "set packet forwarding mode with retry", modes); + cmd_set_fwd_retry_mode.help_str = help; + + /* string token separator is # */ + for (c = token; *modes != '\0'; modes++) + if (*modes == '|') + *c++ = '#'; + else + *c++ = *modes; + token_struct = (cmdline_parse_token_string_t *) + cmd_set_fwd_retry_mode.tokens[2]; + token_struct->string_data.str = token; +} + /* *** SET BURST TX DELAY TIME RETRY NUMBER *** */ struct cmd_set_burst_tx_retry_result { cmdline_fixed_string_t set; @@ -10482,6 +10551,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_fwd_list, (cmdline_parse_inst_t *)&cmd_set_fwd_mask, (cmdline_parse_inst_t *)&cmd_set_fwd_mode, + (cmdline_parse_inst_t *)&cmd_set_fwd_retry_mode, (cmdline_parse_inst_t *)&cmd_set_burst_tx_retry, (cmdline_parse_inst_t *)&cmd_set_promisc_mode_one, (cmdline_parse_inst_t *)&cmd_set_promisc_mode_all, @@ -10621,6 +10691,7 @@ prompt(void) { /* initialize non-constant commands */ cmd_set_fwd_mode_init(); + cmd_set_fwd_retry_mode_init(); testpmd_cl = cmdline_stdin_new(main_ctx, "testpmd> "); if (testpmd_cl == NULL) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 1c552e4..a85bb5f 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -893,8 +893,9 @@ fwd_lcores_config_display(void) void rxtx_config_display(void) { - printf(" %s packet forwarding - CRC stripping %s - " + printf(" %s packet forwarding%s - CRC stripping %s - " "packets/burst=%d\n", cur_fwd_eng->fwd_mode_name, + retry_enabled == 0 ? "" : " with retry", rx_mode.hw_strip_crc ? 
"enabled" : "disabled", nb_pkt_per_burst); @@ -1131,6 +1132,7 @@ simple_fwd_config_setup(void) fwd_streams[i]->tx_port = fwd_ports_ids[j]; fwd_streams[i]->tx_queue = 0; fwd_streams[i]->peer_addr = j; + fwd_streams[i]->retry_enabled = retry_enabled; if (port_topology == PORT_TOPOLOGY_PAIRED) { fwd_streams[j]->rx_port = fwd_ports_ids[j]; @@ -1138,6 +1140,7 @@ simple_fwd_config_setup(void) fwd_streams[j]->tx_port = fwd_ports_ids[i]; fwd_streams[j]->tx_queue = 0; fwd_streams[j]->peer_addr = i; + fwd_streams[j]->retry_enabled = retry_enabled; } } } @@ -1206,6 +1209,7 @@ rss_fwd_config_setup(void) fs->tx_port = fwd_ports_ids[txp]; fs->tx_queue = rxq; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; rxq = (queueid_t) (rxq + 1); if (rxq < nb_q) continue; @@ -1280,6 +1284,7 @@ dcb_fwd_config_setup(void) fs->tx_port = fwd_ports_ids[txp]; fs->tx_queue = txq + j % nb_tx_queue; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; } fwd_lcores[lc_id]->stream_nb += rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue; @@ -1350,6 +1355,7 @@ icmp_echo_config_setup(void) fs->tx_port = fs->rx_port; fs->tx_queue = rxq; fs->peer_addr = fs->tx_port; + fs->retry_enabled = retry_enabled; if (verbose_level > 0) printf(" stream=%d port=%d rxq=%d txq=%d\n", sm_id, fs->rx_port, fs->rx_queue, @@ -1388,14 +1394,15 @@ pkt_fwd_config_display(struct fwd_config *cfg) lcoreid_t lc_id; streamid_t sm_id; - printf("%s packet forwarding - ports=%d - cores=%d - streams=%d - " + printf("%s packet forwarding%s - ports=%d - cores=%d - streams=%d - " "NUMA support %s, MP over anonymous pages %s\n", cfg->fwd_eng->fwd_mode_name, + retry_enabled == 0 ? "" : " with retry", cfg->nb_fwd_ports, cfg->nb_fwd_lcores, cfg->nb_fwd_streams, numa_support == 1 ? "enabled" : "disabled", mp_anon != 0 ? 
"enabled" : "disabled"); - if (strcmp(cfg->fwd_eng->fwd_mode_name, "mac_retry") == 0) + if (retry_enabled) printf("TX retry num: %u, delay between TX retries: %uus\n", burst_tx_retry_num, burst_tx_delay_time); for (lc_id = 0; lc_id < cfg->nb_fwd_lcores; lc_id++) { @@ -1684,6 +1691,31 @@ list_pkt_forwarding_modes(void) return fwd_modes; } +char* +list_pkt_forwarding_retry_modes(void) +{ + static char fwd_modes[128] = ""; + const char *separator = "|"; + struct fwd_engine *fwd_eng; + unsigned i = 0; + + if (strlen(fwd_modes) == 0) { + while ((fwd_eng = fwd_engines[i++]) != NULL) { + if (fwd_eng == &rx_only_engine) + continue; + strncat(fwd_modes, fwd_eng->fwd_mode_name, + sizeof(fwd_modes) - + strlen(fwd_modes) - 1); + strncat(fwd_modes, separator, + sizeof(fwd_modes) - + strlen(fwd_modes) - 1); + } + fwd_modes[strlen(fwd_modes) - strlen(separator)] = '\0'; + } + + return fwd_modes; +} + void set_pkt_forwarding_mode(const char *fwd_mode_name) { @@ -1693,8 +1725,9 @@ set_pkt_forwarding_mode(const char *fwd_mode_name) i = 0; while ((fwd_eng = fwd_engines[i]) != NULL) { if (! strcmp(fwd_eng->fwd_mode_name, fwd_mode_name)) { - printf("Set %s packet forwarding mode\n", - fwd_mode_name); + printf("Set %s packet forwarding mode%s\n", + fwd_mode_name, + retry_enabled == 0 ? 
"" : " with retry"); cur_fwd_eng = fwd_eng; return; } diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c index 1d6cda1..ac4bd8f 100644 --- a/app/test-pmd/csumonly.c +++ b/app/test-pmd/csumonly.c @@ -643,6 +643,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) uint16_t i; uint64_t ol_flags; uint16_t testpmd_ol_flags; + uint32_t retry; uint32_t rx_bad_ip_csum; uint32_t rx_bad_l4_csum; struct testpmd_offload_info info; @@ -848,6 +849,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) } } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; fs->rx_bad_ip_csum += rx_bad_ip_csum; fs->rx_bad_l4_csum += rx_bad_l4_csum; diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index 18b754b..a6abe91 100644 --- a/app/test-pmd/flowgen.c +++ b/app/test-pmd/flowgen.c @@ -131,6 +131,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs) uint16_t nb_tx; uint16_t nb_pkt; uint16_t i; + uint32_t retry; #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; uint64_t end_tsc; @@ -207,6 +208,17 @@ pkt_burst_flow_gen(struct fwd_stream *fs) } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c index ed6e924..be308c9 100644 --- a/app/test-pmd/icmpecho.c +++ b/app/test-pmd/icmpecho.c @@ -311,6 +311,7 
@@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) struct ipv4_hdr *ip_h; struct icmp_hdr *icmp_h; struct ether_addr eth_addr; + uint32_t retry; uint32_t ip_addr; uint16_t nb_rx; uint16_t nb_tx; @@ -518,6 +519,20 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) if (nb_replies > 0) { nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_replies); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_replies) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_replies && + retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, + fs->tx_queue, + &pkts_burst[nb_tx], + nb_replies - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c index 8840d86..7b6033a 100644 --- a/app/test-pmd/iofwd.c +++ b/app/test-pmd/iofwd.c @@ -80,6 +80,8 @@ pkt_burst_io_forward(struct fwd_stream *fs) struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; uint16_t nb_rx; uint16_t nb_tx; + uint32_t retry; + #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; uint64_t end_tsc; @@ -93,16 +95,28 @@ pkt_burst_io_forward(struct fwd_stream *fs) /* * Receive a burst of packets and forward them. 
*/ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); + nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, + pkts_burst, nb_pkt_per_burst); if (unlikely(nb_rx == 0)) return; + fs->rx_packets += nb_rx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->rx_burst_stats.pkt_burst_spread[nb_rx]++; #endif - fs->rx_packets += nb_rx; - nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/macfwd-retry.c b/app/test-pmd/macfwd-retry.c deleted file mode 100644 index d8cd069..0000000 --- a/app/test-pmd/macfwd-retry.c +++ /dev/null @@ -1,167 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include <stdarg.h> -#include <string.h> -#include <stdio.h> -#include <errno.h> -#include <stdint.h> -#include <unistd.h> -#include <inttypes.h> - -#include <sys/queue.h> -#include <sys/stat.h> - -#include <rte_common.h> -#include <rte_byteorder.h> -#include <rte_log.h> -#include <rte_debug.h> -#include <rte_cycles.h> -#include <rte_memory.h> -#include <rte_memcpy.h> -#include <rte_memzone.h> -#include <rte_launch.h> -#include <rte_eal.h> -#include <rte_per_lcore.h> -#include <rte_lcore.h> -#include <rte_atomic.h> -#include <rte_branch_prediction.h> -#include <rte_ring.h> -#include <rte_memory.h> -#include <rte_mempool.h> -#include <rte_mbuf.h> -#include <rte_interrupts.h> -#include <rte_pci.h> -#include <rte_ether.h> -#include <rte_ethdev.h> -#include <rte_ip.h> -#include <rte_string_fns.h> - -#include "testpmd.h" - -#define BURST_TX_WAIT_US 10 -#define BURST_TX_RETRIES 5 - -/* - * Global variables that control number of retires and - * timeout (in us) between retires. - */ -uint32_t burst_tx_delay_time = BURST_TX_WAIT_US; -uint32_t burst_tx_retry_num = BURST_TX_RETRIES; - -/* - * Forwarding of packets in MAC mode with a wait and retry on TX to reduce packet loss. 
- * Change the source and the destination Ethernet addressed of packets - * before forwarding them. - */ -static void -pkt_burst_mac_retry_forward(struct fwd_stream *fs) -{ - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - struct rte_mbuf *mb; - struct ether_hdr *eth_hdr; - uint32_t retry; - uint16_t nb_rx; - uint16_t nb_tx; - uint16_t i; -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - uint64_t start_tsc; - uint64_t end_tsc; - uint64_t core_cycles; -#endif - -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - start_tsc = rte_rdtsc(); -#endif - - /* - * Receive a burst of packets and forward them. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - if (unlikely(nb_rx == 0)) - return; - -#ifdef RTE_TEST_PMD_RECORD_BURST_STATS - fs->rx_burst_stats.pkt_burst_spread[nb_rx]++; -#endif - fs->rx_packets += nb_rx; - for (i = 0; i < nb_rx; i++) { - if (likely(i < nb_rx - 1)) - rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i + 1], - void *)); - mb = pkts_burst[i]; - eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *); - ether_addr_copy(&peer_eth_addrs[fs->peer_addr], - &eth_hdr->d_addr); - ether_addr_copy(&ports[fs->tx_port].eth_addr, - &eth_hdr->s_addr); - } - nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); - - /* - * If not all packets have been TX'd then wait and retry.
- */ - if (unlikely(nb_tx < nb_rx)) { - for (retry = 0; retry < burst_tx_retry_num; retry++) { - rte_delay_us(burst_tx_delay_time); - nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, - &pkts_burst[nb_tx], nb_rx - nb_tx); - if (nb_tx == nb_rx) - break; - } - } - - fs->tx_packets += nb_tx; -#ifdef RTE_TEST_PMD_RECORD_BURST_STATS - fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; -#endif - if (unlikely(nb_tx < nb_rx)) { - fs->fwd_dropped += (nb_rx - nb_tx); - do { - rte_pktmbuf_free(pkts_burst[nb_tx]); - } while (++nb_tx < nb_rx); - } -#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES - end_tsc = rte_rdtsc(); - core_cycles = (end_tsc - start_tsc); - fs->core_cycles = (uint64_t) (fs->core_cycles + core_cycles); -#endif -} - -struct fwd_engine mac_retry_fwd_engine = { - .fwd_mode_name = "mac_retry", - .port_fwd_begin = NULL, - .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_retry_forward, -}; diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index 07a399a..5d1c161 100644 --- a/app/test-pmd/macfwd.c +++ b/app/test-pmd/macfwd.c @@ -81,6 +81,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs) struct rte_port *txp; struct rte_mbuf *mb; struct ether_hdr *eth_hdr; + uint32_t retry; uint16_t nb_rx; uint16_t nb_tx; uint16_t i; @@ -129,6 +130,18 @@ pkt_burst_mac_forward(struct fwd_stream *fs) mb->vlan_tci_outer = txp->tx_vlan_id_outer; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } + fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index c10f4b5..4b0dbeb 100644 --- a/app/test-pmd/macswap.c +++ b/app/test-pmd/macswap.c @@ -84,6 +84,7 @@ 
pkt_burst_mac_swap(struct fwd_stream *fs) uint16_t nb_rx; uint16_t nb_tx; uint16_t i; + uint32_t retry; uint64_t ol_flags = 0; #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES uint64_t start_tsc; @@ -131,6 +132,17 @@ pkt_burst_mac_swap(struct fwd_stream *fs) mb->vlan_tci_outer = txp->tx_vlan_id_outer; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_rx - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS fs->tx_burst_stats.pkt_burst_spread[nb_tx]++; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index dd6b046..dfd27d5 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -144,7 +144,6 @@ streamid_t nb_fwd_streams; /**< Is equal to (nb_ports * nb_rxq). */ struct fwd_engine * fwd_engines[] = { &io_fwd_engine, &mac_fwd_engine, - &mac_retry_fwd_engine, &mac_swap_engine, &flow_gen_engine, &rx_only_engine, @@ -159,6 +158,9 @@ struct fwd_engine * fwd_engines[] = { struct fwd_config cur_fwd_config; struct fwd_engine *cur_fwd_eng = &io_fwd_engine; /**< IO mode by default. */ +uint32_t retry_enabled; +uint32_t burst_tx_delay_time = BURST_TX_WAIT_US; +uint32_t burst_tx_retry_num = BURST_TX_RETRIES; uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. 
*/ uint32_t param_total_num_mbufs = 0; /**< number of mbufs in all pools - if diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 0f72ca1..62ec055 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -103,6 +103,8 @@ struct fwd_stream { queueid_t tx_queue; /**< TX queue to send forwarded packets */ streamid_t peer_addr; /**< index of peer ethernet address of packets */ + unsigned int retry_enabled; + /* "read-write" results */ unsigned int rx_packets; /**< received packets */ unsigned int tx_packets; /**< received packets transmitted */ @@ -220,9 +222,14 @@ struct fwd_engine { packet_fwd_t packet_fwd; /**< Mandatory. */ }; +#define BURST_TX_WAIT_US 1 +#define BURST_TX_RETRIES 64 + +extern uint32_t burst_tx_delay_time; +extern uint32_t burst_tx_retry_num; + extern struct fwd_engine io_fwd_engine; extern struct fwd_engine mac_fwd_engine; -extern struct fwd_engine mac_retry_fwd_engine; extern struct fwd_engine mac_swap_engine; extern struct fwd_engine flow_gen_engine; extern struct fwd_engine rx_only_engine; @@ -380,6 +387,7 @@ extern int8_t tx_wthresh; extern struct fwd_config cur_fwd_config; extern struct fwd_engine *cur_fwd_eng; +extern uint32_t retry_enabled; extern struct fwd_lcore **fwd_lcores; extern struct fwd_stream **fwd_streams; @@ -523,6 +531,7 @@ void show_tx_pkt_segments(void); void set_tx_pkt_split(const char *name); void set_nb_pkt_per_burst(uint16_t pkt_burst); char *list_pkt_forwarding_modes(void); +char *list_pkt_forwarding_retry_modes(void); void set_pkt_forwarding_mode(const char *fwd_mode); void start_packet_forwarding(int with_tx_first); void stop_packet_forwarding(void); diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index 0ac2a08..11fd681 100644 --- a/app/test-pmd/txonly.c +++ b/app/test-pmd/txonly.c @@ -193,6 +193,7 @@ pkt_burst_transmit(struct fwd_stream *fs) uint16_t nb_tx; uint16_t nb_pkt; uint16_t vlan_tci, vlan_tci_outer; + uint32_t retry; uint64_t ol_flags = 0; uint8_t i; #ifdef 
RTE_TEST_PMD_RECORD_CORE_CYCLES @@ -273,6 +274,17 @@ pkt_burst_transmit(struct fwd_stream *fs) pkts_burst[nb_pkt] = pkt; } nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt); + /* + * Retry if necessary + */ + if (unlikely(nb_tx < nb_pkt) && fs->retry_enabled) { + retry = 0; + while (nb_tx < nb_pkt && retry++ < burst_tx_retry_num) { + rte_delay_us(burst_tx_delay_time); + nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue, + &pkts_burst[nb_tx], nb_pkt - nb_tx); + } + } fs->tx_packets += nb_tx; #ifdef RTE_TEST_PMD_RECORD_BURST_STATS diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index 8fb0651..7712bd2 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -333,7 +333,6 @@ The commandline options are: io (the default) mac - mac_retry mac_swap flowgen rxonly diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 22bb108..d812989 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -249,8 +249,10 @@ set fwd Set the packet forwarding mode:: - testpmd> set fwd (io|mac|mac_retry|macswap|flowgen| \ - rxonly|txonly|csum|icmpecho) + testpmd> set fwd (io|mac|macswap|flowgen| \ + rxonly|txonly|csum|icmpecho) (""|retry) + +``retry`` can be specified for forwarding engines except ``rx_only``. The available information categories are: @@ -260,8 +262,6 @@ The available information categories are: * ``mac``: Changes the source and the destination Ethernet addresses of packets before forwarding them. -* ``mac_retry``: Same as "mac" forwarding mode, but includes retries if the destination queue is full. - * ``macswap``: MAC swap forwarding mode. Swaps the source and the destination Ethernet addresses of packets before forwarding them. @@ -392,9 +392,9 @@ Set number of packets per burst:: This is equivalent to the ``--burst command-line`` option. 
-In ``mac_retry`` forwarding mode, the transmit delay time and number of retries can also be set:: +When retry is enabled, the transmit delay time and number of retries can also be set:: - testpmd> set burst tx delay (micrseconds) retry (num) + testpmd> set burst tx delay (microseconds) retry (num) set txpkts ~~~~~~~~~~ -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 2/5] testpmd: configurable tx_first burst number 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 3/5] testpmd: show throughput in port stats Zhihong Wang ` (3 subsequent siblings) 5 siblings, 0 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch enables configurable tx_first burst number. Use "start tx_first (burst_num)" to specify how many bursts of packets to be sent before forwarding start, or "start tx_first" like before for the default 1 burst send. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> --- app/test-pmd/cmdline.c | 41 +++++++++++++++++++++++++++++ app/test-pmd/testpmd.c | 7 +++-- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 6 +++-- 3 files changed, 50 insertions(+), 4 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index e414c0f..9e0b518 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -5386,6 +5386,46 @@ cmdline_parse_inst_t cmd_start_tx_first = { }, }; +/* *** START FORWARDING WITH N TX BURST FIRST *** */ +struct cmd_start_tx_first_n_result { + cmdline_fixed_string_t start; + cmdline_fixed_string_t tx_first; + uint32_t tx_num; +}; + +static void +cmd_start_tx_first_n_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_start_tx_first_n_result *res = parsed_result; + + start_packet_forwarding(res->tx_num); +} + +cmdline_parse_token_string_t cmd_start_tx_first_n_start = + TOKEN_STRING_INITIALIZER(struct cmd_start_tx_first_n_result, + start, "start"); +cmdline_parse_token_string_t 
cmd_start_tx_first_n_tx_first = + TOKEN_STRING_INITIALIZER(struct cmd_start_tx_first_n_result, + tx_first, "tx_first"); +cmdline_parse_token_num_t cmd_start_tx_first_n_tx_num = + TOKEN_NUM_INITIALIZER(struct cmd_start_tx_first_n_result, + tx_num, UINT32); + +cmdline_parse_inst_t cmd_start_tx_first_n = { + .f = cmd_start_tx_first_n_parsed, + .data = NULL, + .help_str = "start packet forwarding, after sending <num> " + "bursts of packets", + .tokens = { + (void *)&cmd_start_tx_first_n_start, + (void *)&cmd_start_tx_first_n_tx_first, + (void *)&cmd_start_tx_first_n_tx_num, + NULL, + }, +}; + /* *** SET LINK UP *** */ struct cmd_set_link_up_result { cmdline_fixed_string_t set; @@ -10542,6 +10582,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_showcfg, (cmdline_parse_inst_t *)&cmd_start, (cmdline_parse_inst_t *)&cmd_start_tx_first, + (cmdline_parse_inst_t *)&cmd_start_tx_first_n, (cmdline_parse_inst_t *)&cmd_set_link_up, (cmdline_parse_inst_t *)&cmd_set_link_down, (cmdline_parse_inst_t *)&cmd_reset, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index dfd27d5..74b044e 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1049,8 +1049,11 @@ start_packet_forwarding(int with_tx_first) for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) (*port_fwd_begin)(fwd_ports_ids[i]); } - launch_packet_forwarding(run_one_txonly_burst_on_core); - rte_eal_mp_wait_lcore(); + while (with_tx_first--) { + launch_packet_forwarding( + run_one_txonly_burst_on_core); + rte_eal_mp_wait_lcore(); + } port_fwd_end = tx_only_engine.port_fwd_end; if (port_fwd_end != NULL) { for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index d812989..4e19229 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -98,9 +98,11 @@ Start packet forwarding with current configuration:: start tx_first ~~~~~~~~~~~~~~ 
-Start packet forwarding with current configuration after sending one burst of packets:: +Start packet forwarding with current configuration after sending specified number of bursts of packets:: - testpmd> start tx_first + testpmd> start tx_first (""|burst_num) + +The default burst number is 1 when ``burst_num`` not presented. stop ~~~~ -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 3/5] testpmd: show throughput in port stats 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 2/5] testpmd: configurable tx_first burst number Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang ` (2 subsequent siblings) 5 siblings, 0 replies; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch adds throughput numbers (in the period since last use of this command) in port statistics display for "show port stats (port_id|all)". Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/config.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a85bb5f..ede7c78 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -92,6 +92,7 @@ #include <rte_ether.h> #include <rte_ethdev.h> #include <rte_string_fns.h> +#include <rte_cycles.h> #include "testpmd.h" @@ -150,6 +151,11 @@ print_ethaddr(const char *name, struct ether_addr *eth_addr) void nic_stats_display(portid_t port_id) { + static uint64_t prev_pkts_rx[RTE_MAX_ETHPORTS]; + static uint64_t prev_pkts_tx[RTE_MAX_ETHPORTS]; + static uint64_t prev_cycles[RTE_MAX_ETHPORTS]; + uint64_t diff_pkts_rx, diff_pkts_tx, diff_cycles; + uint64_t mpps_rx, mpps_tx; struct rte_eth_stats stats; struct rte_port *port = &ports[port_id]; uint8_t i; @@ -209,6 +215,23 @@ nic_stats_display(portid_t port_id) } } + diff_cycles = prev_cycles[port_id]; + prev_cycles[port_id] = rte_rdtsc(); + if (diff_cycles > 0) + diff_cycles = prev_cycles[port_id] - diff_cycles; + + diff_pkts_rx = stats.ipackets - prev_pkts_rx[port_id]; + 
diff_pkts_tx = stats.opackets - prev_pkts_tx[port_id]; + prev_pkts_rx[port_id] = stats.ipackets; + prev_pkts_tx[port_id] = stats.opackets; + mpps_rx = diff_cycles > 0 ? + diff_pkts_rx * rte_get_tsc_hz() / diff_cycles : 0; + mpps_tx = diff_cycles > 0 ? + diff_pkts_tx * rte_get_tsc_hz() / diff_cycles : 0; + printf("\n Throughput (since last show)\n"); + printf(" Rx-pps: %12"PRIu64"\n Tx-pps: %12"PRIu64"\n", + mpps_rx, mpps_tx); + printf(" %s############################%s\n", nic_stats_border, nic_stats_border); } -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang ` (2 preceding siblings ...) 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 3/5] testpmd: show throughput in port stats Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-27 14:23 ` Nélio Laranjeiro 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start Zhihong Wang 2016-06-15 10:04 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility De Lara Guarch, Pablo 5 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch removes constraints in rxq handling when multiqueue is enabled to handle all the rxqs. Current testpmd forces a dedicated core for each rxq, some rxqs may be ignored when core number is less than rxq number, and that causes confusion and inconvenience. One example: One Red Hat engineer was doing multiqueue test, there're 2 ports in guest each with 4 queues, and testpmd was used as the forwarding engine in guest, as usual he used 1 core for forwarding, as a results he only saw traffic from port 0 queue 0 to port 1 queue 0, then a lot of emails and quite some time are spent to root cause it, and of course it's caused by this unreasonable testpmd behavior. Moreover, even if we understand this behavior, if we want to test the above case, we still need 8 cores for a single guest to poll all the rxqs, obviously this is too expensive. 
We met quite a lot cases like this, one recent example: http://openvswitch.org/pipermail/dev/2016-June/072110.html Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> --- app/test-pmd/config.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index ede7c78..4719a08 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void) cur_fwd_config.nb_fwd_ports = nb_fwd_ports; cur_fwd_config.nb_fwd_streams = (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) - cur_fwd_config.nb_fwd_streams = - (streamid_t)cur_fwd_config.nb_fwd_lcores; - else - cur_fwd_config.nb_fwd_lcores = - (lcoreid_t)cur_fwd_config.nb_fwd_streams; /* reinitialize forwarding streams */ init_fwd_streams(); setup_fwd_config_of_each_lcore(&cur_fwd_config); rxp = 0; rxq = 0; - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { struct fwd_stream *fs; fs = fwd_streams[lc_id]; -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang @ 2016-06-27 14:23 ` Nélio Laranjeiro 2016-06-27 22:36 ` De Lara Guarch, Pablo 0 siblings, 1 reply; 54+ messages in thread From: Nélio Laranjeiro @ 2016-06-27 14:23 UTC (permalink / raw) To: Zhihong Wang Cc: dev, konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote: > This patch removes constraints in rxq handling when multiqueue is enabled > to handle all the rxqs. > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > ignored when core number is less than rxq number, and that causes confusion > and inconvenience. > > One example: One Red Hat engineer was doing multiqueue test, there're 2 > ports in guest each with 4 queues, and testpmd was used as the forwarding > engine in guest, as usual he used 1 core for forwarding, as a results he > only saw traffic from port 0 queue 0 to port 1 queue 0, then a lot of > emails and quite some time are spent to root cause it, and of course it's > caused by this unreasonable testpmd behavior. > > Moreover, even if we understand this behavior, if we want to test the > above case, we still need 8 cores for a single guest to poll all the > rxqs, obviously this is too expensive. 
> > We met quite a lot cases like this, one recent example: > http://openvswitch.org/pipermail/dev/2016-June/072110.html > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > --- > app/test-pmd/config.c | 8 +------- > 1 file changed, 1 insertion(+), 7 deletions(-) > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > index ede7c78..4719a08 100644 > --- a/app/test-pmd/config.c > +++ b/app/test-pmd/config.c > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void) > cur_fwd_config.nb_fwd_ports = nb_fwd_ports; > cur_fwd_config.nb_fwd_streams = > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); > - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) > - cur_fwd_config.nb_fwd_streams = > - (streamid_t)cur_fwd_config.nb_fwd_lcores; > - else > - cur_fwd_config.nb_fwd_lcores = > - (lcoreid_t)cur_fwd_config.nb_fwd_streams; > > /* reinitialize forwarding streams */ > init_fwd_streams(); > > setup_fwd_config_of_each_lcore(&cur_fwd_config); > rxp = 0; rxq = 0; > - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { > + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { > struct fwd_stream *fs; > > fs = fwd_streams[lc_id]; > -- > 2.5.0 Hi Zhihong, It seems this commits introduce a bug in pkt_burst_transmit(), this only occurs when the number of cores present in the coremask is greater than the number of queues i.e. coremask=0xffe --txq=4 --rxq=4. 
Port 0 Link Up - speed 40000 Mbps - full-duplex Port 1 Link Up - speed 40000 Mbps - full-duplex Done testpmd> start tx_first io packet forwarding - CRC stripping disabled - packets/burst=64 nb forwarding cores=10 - nb forwarding ports=2 RX queues=4 - RX desc=256 - RX free threshold=0 RX threshold registers: pthresh=0 hthresh=0 wthresh=0 TX queues=4 - TX desc=256 - TX free threshold=0 TX threshold registers: pthresh=0 hthresh=0 wthresh=0 TX RS bit threshold=0 - TXQ flags=0x0 Segmentation fault (core dumped) If I start testpmd with a coremask with at most as many cores as queues, everything works well (i.e. coremask=0xff0, or 0xf00). Are you able to reproduce the same issue? Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204). Regards, -- Nélio Laranjeiro 6WIND ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup 2016-06-27 14:23 ` Nélio Laranjeiro @ 2016-06-27 22:36 ` De Lara Guarch, Pablo 2016-06-28 8:34 ` Nélio Laranjeiro 0 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-27 22:36 UTC (permalink / raw) To: Nélio Laranjeiro, Wang, Zhihong Cc: dev, Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon Hi Nelio, > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro > Sent: Monday, June 27, 2016 3:24 PM > To: Wang, Zhihong > Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, > Pablo; thomas.monjalon@6wind.com > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup > > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote: > > This patch removes constraints in rxq handling when multiqueue is enabled > > to handle all the rxqs. > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > ignored when core number is less than rxq number, and that causes > confusion > > and inconvenience. > > > > One example: One Red Hat engineer was doing multiqueue test, there're 2 > > ports in guest each with 4 queues, and testpmd was used as the forwarding > > engine in guest, as usual he used 1 core for forwarding, as a results he > > only saw traffic from port 0 queue 0 to port 1 queue 0, then a lot of > > emails and quite some time are spent to root cause it, and of course it's > > caused by this unreasonable testpmd behavior. > > > > Moreover, even if we understand this behavior, if we want to test the > > above case, we still need 8 cores for a single guest to poll all the > > rxqs, obviously this is too expensive. 
> > > > We met quite a lot cases like this, one recent example: > > http://openvswitch.org/pipermail/dev/2016-June/072110.html > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > --- > > app/test-pmd/config.c | 8 +------- > > 1 file changed, 1 insertion(+), 7 deletions(-) > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > > index ede7c78..4719a08 100644 > > --- a/app/test-pmd/config.c > > +++ b/app/test-pmd/config.c > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void) > > cur_fwd_config.nb_fwd_ports = nb_fwd_ports; > > cur_fwd_config.nb_fwd_streams = > > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); > > - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) > > - cur_fwd_config.nb_fwd_streams = > > - (streamid_t)cur_fwd_config.nb_fwd_lcores; > > - else > > - cur_fwd_config.nb_fwd_lcores = > > - (lcoreid_t)cur_fwd_config.nb_fwd_streams; > > > > /* reinitialize forwarding streams */ > > init_fwd_streams(); > > > > setup_fwd_config_of_each_lcore(&cur_fwd_config); > > rxp = 0; rxq = 0; > > - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { > > + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { > > struct fwd_stream *fs; > > > > fs = fwd_streams[lc_id]; > > -- > > 2.5.0 > > Hi Zhihong, > > It seems this commits introduce a bug in pkt_burst_transmit(), this only > occurs when the number of cores present in the coremask is greater than > the number of queues i.e. coremask=0xffe --txq=4 --rxq=4. 
> > Port 0 Link Up - speed 40000 Mbps - full-duplex > Port 1 Link Up - speed 40000 Mbps - full-duplex > Done > testpmd> start tx_first > io packet forwarding - CRC stripping disabled - packets/burst=64 > nb forwarding cores=10 - nb forwarding ports=2 > RX queues=4 - RX desc=256 - RX free threshold=0 > RX threshold registers: pthresh=0 hthresh=0 wthresh=0 > TX queues=4 - TX desc=256 - TX free threshold=0 > TX threshold registers: pthresh=0 hthresh=0 wthresh=0 > TX RS bit threshold=0 - TXQ flags=0x0 > Segmentation fault (core dumped) > > > If I start testpmd with a coremask with at most as many cores as queues, > everything works well (i.e. coremask=0xff0, or 0xf00). > > Are you able to reproduce the same issue? > Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204). Thanks for reporting this. I was able to reproduce this issue and sent a patch that should fix it. Could you verify it? http://dpdk.org/dev/patchwork/patch/14430/ Thanks Pablo > > Regards, > > -- > Nélio Laranjeiro > 6WIND ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup 2016-06-27 22:36 ` De Lara Guarch, Pablo @ 2016-06-28 8:34 ` Nélio Laranjeiro 2016-06-28 11:10 ` Wang, Zhihong 0 siblings, 1 reply; 54+ messages in thread From: Nélio Laranjeiro @ 2016-06-28 8:34 UTC (permalink / raw) To: De Lara Guarch, Pablo Cc: Wang, Zhihong, dev, Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon Hi Pablo, On Mon, Jun 27, 2016 at 10:36:38PM +0000, De Lara Guarch, Pablo wrote: > Hi Nelio, > > > -----Original Message----- > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro > > Sent: Monday, June 27, 2016 3:24 PM > > To: Wang, Zhihong > > Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, > > Pablo; thomas.monjalon@6wind.com > > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup > > > > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote: > > > This patch removes constraints in rxq handling when multiqueue is enabled > > > to handle all the rxqs. > > > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > > ignored when core number is less than rxq number, and that causes > > confusion > > > and inconvenience. > > > > > > One example: One Red Hat engineer was doing multiqueue test, there're 2 > > > ports in guest each with 4 queues, and testpmd was used as the forwarding > > > engine in guest, as usual he used 1 core for forwarding, as a results he > > > only saw traffic from port 0 queue 0 to port 1 queue 0, then a lot of > > > emails and quite some time are spent to root cause it, and of course it's > > > caused by this unreasonable testpmd behavior. > > > > > > Moreover, even if we understand this behavior, if we want to test the > > > above case, we still need 8 cores for a single guest to poll all the > > > rxqs, obviously this is too expensive. 
> > > > > > We met quite a lot cases like this, one recent example: > > > http://openvswitch.org/pipermail/dev/2016-June/072110.html > > > > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > > --- > > > app/test-pmd/config.c | 8 +------- > > > 1 file changed, 1 insertion(+), 7 deletions(-) > > > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > > > index ede7c78..4719a08 100644 > > > --- a/app/test-pmd/config.c > > > +++ b/app/test-pmd/config.c > > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void) > > > cur_fwd_config.nb_fwd_ports = nb_fwd_ports; > > > cur_fwd_config.nb_fwd_streams = > > > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); > > > - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) > > > - cur_fwd_config.nb_fwd_streams = > > > - (streamid_t)cur_fwd_config.nb_fwd_lcores; > > > - else > > > - cur_fwd_config.nb_fwd_lcores = > > > - (lcoreid_t)cur_fwd_config.nb_fwd_streams; > > > > > > /* reinitialize forwarding streams */ > > > init_fwd_streams(); > > > > > > setup_fwd_config_of_each_lcore(&cur_fwd_config); > > > rxp = 0; rxq = 0; > > > - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { > > > + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { > > > struct fwd_stream *fs; > > > > > > fs = fwd_streams[lc_id]; > > > -- > > > 2.5.0 > > > > Hi Zhihong, > > > > It seems this commits introduce a bug in pkt_burst_transmit(), this only > > occurs when the number of cores present in the coremask is greater than > > the number of queues i.e. coremask=0xffe --txq=4 --rxq=4. 
> > > > Port 0 Link Up - speed 40000 Mbps - full-duplex > > Port 1 Link Up - speed 40000 Mbps - full-duplex > > Done > > testpmd> start tx_first > > io packet forwarding - CRC stripping disabled - packets/burst=64 > > nb forwarding cores=10 - nb forwarding ports=2 > > RX queues=4 - RX desc=256 - RX free threshold=0 > > RX threshold registers: pthresh=0 hthresh=0 wthresh=0 > > TX queues=4 - TX desc=256 - TX free threshold=0 > > TX threshold registers: pthresh=0 hthresh=0 wthresh=0 > > TX RS bit threshold=0 - TXQ flags=0x0 > > Segmentation fault (core dumped) > > > > > > If I start testpmd with a coremask with at most as many cores as queues, > > everything works well (i.e. coremask=0xff0, or 0xf00). > > > > Are you able to reproduce the same issue? > > Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204). > > Thanks for reporting this. I was able to reproduce this issue and > sent a patch that should fix it. Could you verify it? > http://dpdk.org/dev/patchwork/patch/14430/ I have tested it, it works, I will add a test report on the corresponding email. Thanks > > > Thanks > Pablo > > > > Regards, -- Nélio Laranjeiro 6WIND ^ permalink raw reply [flat|nested] 54+ messages in thread
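The segfault reported above follows from the patch removing the clamp between stream count and lcore count in `rss_fwd_config_setup()`: with rxq=4 on 2 ports there are only 8 streams, so a 10-core coremask leaves more forwarding lcores than streams, and per-lcore indexing into `fwd_streams[]` runs past the array. A minimal sketch of the invariant the removed `else` branch enforced — hypothetical helper names, not the actual testpmd code:

```c
#include <assert.h>

/* Hypothetical sketch of the clamping removed from rss_fwd_config_setup():
 * the number of forwarding lcores must never exceed the number of streams,
 * otherwise any loop that indexes fwd_streams[] per lcore walks off the
 * end of the array (the crash Nelio observed with coremask=0xffe, rxq=4). */
static unsigned int
rss_stream_count(unsigned int nb_q, unsigned int nb_fwd_ports)
{
	return nb_q * nb_fwd_ports;
}

static unsigned int
clamp_lcores_to_streams(unsigned int nb_lcores, unsigned int nb_streams)
{
	return nb_lcores > nb_streams ? nb_streams : nb_lcores;
}
```

With 4 queues and 2 ports, `clamp_lcores_to_streams(10, rss_stream_count(4, 2))` yields 8, matching Nélio's observation that a coremask with at most as many cores as queues works fine.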
* Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup 2016-06-28 8:34 ` Nélio Laranjeiro @ 2016-06-28 11:10 ` Wang, Zhihong 0 siblings, 0 replies; 54+ messages in thread From: Wang, Zhihong @ 2016-06-28 11:10 UTC (permalink / raw) To: Nélio Laranjeiro, De Lara Guarch, Pablo Cc: dev, Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon Thanks Nelio and Pablo! > -----Original Message----- > From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com] > Sent: Tuesday, June 28, 2016 4:34 PM > To: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com> > Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce > <bruce.richardson@intel.com>; thomas.monjalon@6wind.com > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup > > Hi Pablo, > > On Mon, Jun 27, 2016 at 10:36:38PM +0000, De Lara Guarch, Pablo wrote: > > Hi Nelio, > > > > > -----Original Message----- > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro > > > Sent: Monday, June 27, 2016 3:24 PM > > > To: Wang, Zhihong > > > Cc: dev@dpdk.org; Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, > > > Pablo; thomas.monjalon@6wind.com > > > Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup > > > > > > On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote: > > > > This patch removes constraints in rxq handling when multiqueue is enabled > > > > to handle all the rxqs. > > > > > > > > Current testpmd forces a dedicated core for each rxq, some rxqs may be > > > > ignored when core number is less than rxq number, and that causes > > > confusion > > > > and inconvenience. 
> > > > > > > > One example: One Red Hat engineer was doing multiqueue test, there're 2 > > > > ports in guest each with 4 queues, and testpmd was used as the forwarding > > > > engine in guest, as usual he used 1 core for forwarding, as a results he > > > > only saw traffic from port 0 queue 0 to port 1 queue 0, then a lot of > > > > emails and quite some time are spent to root cause it, and of course it's > > > > caused by this unreasonable testpmd behavior. > > > > > > > > Moreover, even if we understand this behavior, if we want to test the > > > > above case, we still need 8 cores for a single guest to poll all the > > > > rxqs, obviously this is too expensive. > > > > > > > > We met quite a lot cases like this, one recent example: > > > > http://openvswitch.org/pipermail/dev/2016-June/072110.html > > > > > > > > > > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > > > > --- > > > > app/test-pmd/config.c | 8 +------- > > > > 1 file changed, 1 insertion(+), 7 deletions(-) > > > > > > > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > > > > index ede7c78..4719a08 100644 > > > > --- a/app/test-pmd/config.c > > > > +++ b/app/test-pmd/config.c > > > > @@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void) > > > > cur_fwd_config.nb_fwd_ports = nb_fwd_ports; > > > > cur_fwd_config.nb_fwd_streams = > > > > (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports); > > > > - if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores) > > > > - cur_fwd_config.nb_fwd_streams = > > > > - (streamid_t)cur_fwd_config.nb_fwd_lcores; > > > > - else > > > > - cur_fwd_config.nb_fwd_lcores = > > > > - (lcoreid_t)cur_fwd_config.nb_fwd_streams; > > > > > > > > /* reinitialize forwarding streams */ > > > > init_fwd_streams(); > > > > > > > > setup_fwd_config_of_each_lcore(&cur_fwd_config); > > > > rxp = 0; rxq = 0; > > > > - for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) { > > > > + for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) { > 
> > > struct fwd_stream *fs; > > > > > > > > fs = fwd_streams[lc_id]; > > > > -- > > > > 2.5.0 > > > > > > Hi Zhihong, > > > > > > It seems this commits introduce a bug in pkt_burst_transmit(), this only > > > occurs when the number of cores present in the coremask is greater than > > > the number of queues i.e. coremask=0xffe --txq=4 --rxq=4. > > > > > > Port 0 Link Up - speed 40000 Mbps - full-duplex > > > Port 1 Link Up - speed 40000 Mbps - full-duplex > > > Done > > > testpmd> start tx_first > > > io packet forwarding - CRC stripping disabled - packets/burst=64 > > > nb forwarding cores=10 - nb forwarding ports=2 > > > RX queues=4 - RX desc=256 - RX free threshold=0 > > > RX threshold registers: pthresh=0 hthresh=0 wthresh=0 > > > TX queues=4 - TX desc=256 - TX free threshold=0 > > > TX threshold registers: pthresh=0 hthresh=0 wthresh=0 > > > TX RS bit threshold=0 - TXQ flags=0x0 > > > Segmentation fault (core dumped) > > > > > > > > > If I start testpmd with a coremask with at most as many cores as queues, > > > everything works well (i.e. coremask=0xff0, or 0xf00). > > > > > > Are you able to reproduce the same issue? > > > Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204). > > > > Thanks for reporting this. I was able to reproduce this issue and > > sent a patch that should fix it. Could you verify it? > > http://dpdk.org/dev/patchwork/patch/14430/ > > > I have tested it, it works, I will add a test report on the > corresponding email. > > Thanks > > > > > > Thanks > > Pablo > > > > > > Regards, > > -- > Nélio Laranjeiro > 6WIND ^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang ` (3 preceding siblings ...) 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang @ 2016-06-14 23:08 ` Zhihong Wang 2016-06-16 11:09 ` De Lara Guarch, Pablo 2016-06-15 10:04 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility De Lara Guarch, Pablo 5 siblings, 1 reply; 54+ messages in thread From: Zhihong Wang @ 2016-06-14 23:08 UTC (permalink / raw) To: dev Cc: konstantin.ananyev, bruce.richardson, pablo.de.lara.guarch, thomas.monjalon, Zhihong Wang This patch show topology at forwarding start. "show config fwd" also does this, but showing it directly can reduce the possibility of misconfiguration. Currently fwd_config_display() calls fwd_config_setup(), this misleading behavior will be fixed in other patches. Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> --- app/test-pmd/testpmd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 74b044e..50dddbe 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1016,6 +1016,7 @@ start_packet_forwarding(int with_tx_first) flush_fwd_rx_queues(); fwd_config_setup(); + fwd_config_display(); rxtx_config_display(); for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { -- 2.5.0 ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start Zhihong Wang @ 2016-06-16 11:09 ` De Lara Guarch, Pablo 2016-06-16 13:33 ` Thomas Monjalon 0 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-16 11:09 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 15, 2016 12:08 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com; Wang, Zhihong > Subject: [PATCH v3 5/5] testpmd: show topology at forwarding start > > This patch show topology at forwarding start. > > "show config fwd" also does this, but showing it directly can reduce the > possibility of misconfiguration. > > Currently fwd_config_display() calls fwd_config_setup(), this misleading > behavior will be fixed in other patches. > > > Signed-off-by: Zhihong Wang <zhihong.wang@intel.com> > Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> > --- > app/test-pmd/testpmd.c | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c > index 74b044e..50dddbe 100644 > --- a/app/test-pmd/testpmd.c > +++ b/app/test-pmd/testpmd.c > @@ -1016,6 +1016,7 @@ start_packet_forwarding(int with_tx_first) > flush_fwd_rx_queues(); > > fwd_config_setup(); > + fwd_config_display(); > rxtx_config_display(); > > for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { > -- > 2.5.0 Already acked this, but note that fwd_config_display() has been renamed to pkt_fwd_config_display(). Thomas, can you make that change when merging this? Thanks, Pablo ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start 2016-06-16 11:09 ` De Lara Guarch, Pablo @ 2016-06-16 13:33 ` Thomas Monjalon 0 siblings, 0 replies; 54+ messages in thread From: Thomas Monjalon @ 2016-06-16 13:33 UTC (permalink / raw) To: De Lara Guarch, Pablo Cc: Wang, Zhihong, dev, Ananyev, Konstantin, Richardson, Bruce 2016-06-16 11:09, De Lara Guarch, Pablo: > > --- a/app/test-pmd/testpmd.c > > +++ b/app/test-pmd/testpmd.c > > @@ -1016,6 +1016,7 @@ start_packet_forwarding(int with_tx_first) > > flush_fwd_rx_queues(); > > > > fwd_config_setup(); > > + fwd_config_display(); > > rxtx_config_display(); > > > > for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) { > > -- > > 2.5.0 > > Already acked this, but note that fwd_config_display() has been renamed to pkt_fwd_config_display(). > Thomas, can you make that change when merging this? Yes done :) ^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang ` (4 preceding siblings ...) 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start Zhihong Wang @ 2016-06-15 10:04 ` De Lara Guarch, Pablo 2016-06-16 14:36 ` Thomas Monjalon 5 siblings, 1 reply; 54+ messages in thread From: De Lara Guarch, Pablo @ 2016-06-15 10:04 UTC (permalink / raw) To: Wang, Zhihong, dev Cc: Ananyev, Konstantin, Richardson, Bruce, thomas.monjalon > -----Original Message----- > From: Wang, Zhihong > Sent: Wednesday, June 15, 2016 12:08 AM > To: dev@dpdk.org > Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo; > thomas.monjalon@6wind.com > Subject: [PATCH v3 0/5] vhost/virtio performance loopback utility > > This patch enables vhost/virtio pmd performance loopback test in testpmd. > All the features are for general usage. > > The loopback test focuses on the maximum full-path packet forwarding > performance between host and guest, it runs vhost/virtio pmd only without > introducing extra overhead. > > Therefore, the main requirement is traffic generation, since there's no > other packet generators like IXIA to help. > > In current testpmd, iofwd is the best candidate to perform this loopback > test because it's the fastest possible forwarding engine: Start testpmd > iofwd in host with 1 vhost port, and start testpmd iofwd in the connected > guest with 1 corresponding virtio port, and these 2 ports form a forwarding > loop: Host vhost Tx -> Guest virtio Rx -> Guest virtio Tx -> Host vhost Rx. > > As to traffic generation, "start tx_first" injects a burst of packets into > the loop. > > However 2 issues remain: > > 1. If only 1 burst of packets are injected in the loop, there will > definitely be empty Rx operations, e.g. 
When guest virtio port send > burst to the host, then it starts the Rx immediately, it's likely > the packets are still being forwarded by host vhost port and haven't > reached the guest yet. > > We need to fill up the ring to keep all pmds busy. > > 2. iofwd doesn't provide retry mechanism, so if packet loss occurs, > there won't be a full burst in the loop. > > To address these issues, this patch: > > 1. Add retry option in testpmd to prevent most packet losses. > > 2. Add parameter to enable configurable tx_first burst number. > > Other related improvements include: > > 1. Handle all rxqs when multiqueue is enabled: Current testpmd forces a > single core for each rxq which causes inconvenience and confusion. > > This change doesn't break anything, we can still force a single core > for each rxq, by giving the same number of cores with the number of > rxqs. > > One example: One Red Hat engineer was doing multiqueue test, there're > 2 ports in guest each with 4 queues, and testpmd was used as the > forwarding engine in guest, as usual he used 1 core for forwarding, as > a results he only saw traffic from port 0 queue 0 to port 1 queue 0, > then a lot of emails and quite some time are spent to root cause it, > and of course it's caused by this unreasonable testpmd behavior. > > Moreover, even if we understand this behavior, if we want to test the > above case, we still need 8 cores for a single guest to poll all the > rxqs, obviously this is too expensive. > > We met quite a lot cases like this, one recent example: > http://openvswitch.org/pipermail/dev/2016-June/072110.html > > 2. Show topology at forwarding start: "show config fwd" also does this, > but show it directly can reduce the possibility of mis-configuration. > > Like the case above, if testpmd shows topology at forwarding start, > then probably all those debugging efforts can be saved. > > 3. Add throughput information in port statistics display for "show port > stats (port_id|all)". 
> > Finally there's documentation update. > > Example on how to enable vhost/virtio performance loopback test: > > 1. Start testpmd in host with 1 vhost port only. > > 2. Start testpmd in guest with only 1 virtio port connected to the > corresponding vhost port. > > 3. "set fwd io retry" in testpmds in both host and guest. > > 4. "start" in testpmd in guest. > > 5. "start tx_first 16" in testpmd in host. > > Then use "show port stats all" to monitor the performance. > > -------------- > Changes in v2: > > 1. Add retry as an option for existing forwarding engines except rxonly. > > 2. Minor code adjustment and more detailed patch description. > > -------------- > Changes in v3: > > 1. Add more details in commit log. > > 2. Give variables more meaningful names. > > 3. Fix a typo in existing doc. > > 4. Rebase the patches. > > > Zhihong Wang (5): > testpmd: add retry option > testpmd: configurable tx_first burst number > testpmd: show throughput in port stats > testpmd: handle all rxqs in rss setup > testpmd: show topology at forwarding start > > app/test-pmd/Makefile | 1 - > app/test-pmd/cmdline.c | 116 ++++++++++++++++++- > app/test-pmd/config.c | 74 ++++++++++-- > app/test-pmd/csumonly.c | 12 ++ > app/test-pmd/flowgen.c | 12 ++ > app/test-pmd/icmpecho.c | 15 +++ > app/test-pmd/iofwd.c | 22 +++- > app/test-pmd/macfwd-retry.c | 167 ---------------------------- > app/test-pmd/macfwd.c | 13 +++ > app/test-pmd/macswap.c | 12 ++ > app/test-pmd/testpmd.c | 12 +- > app/test-pmd/testpmd.h | 11 +- > app/test-pmd/txonly.c | 12 ++ > doc/guides/testpmd_app_ug/run_app.rst | 1 - > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 18 +-- > 15 files changed, 299 insertions(+), 199 deletions(-) > delete mode 100644 app/test-pmd/macfwd-retry.c > > -- > 2.5.0 Series-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> ^ permalink raw reply [flat|nested] 54+ messages in thread
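The retry option described in the cover letter above keeps the fixed number of packets injected by "start tx_first" circulating in the loop: when a tx burst is partially accepted, the unsent mbufs are re-offered a bounded number of times instead of being dropped. A hedged sketch of that pattern — `tx_burst_fn` stands in for `rte_eth_tx_burst()`, and the helper names are illustrative, not testpmd's actual code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical retry loop in the spirit of "set fwd io retry": re-offer
 * unsent packets up to max_retries times so a full burst survives
 * transient ring-full conditions; a real implementation would also
 * rte_delay_us() between attempts. */
typedef uint16_t (*tx_burst_fn)(void *txq, void **pkts, uint16_t n);

static uint16_t
tx_burst_with_retry(tx_burst_fn tx, void *txq, void **pkts,
		    uint16_t nb_pkts, unsigned int max_retries)
{
	uint16_t nb_tx = tx(txq, pkts, nb_pkts);

	while (nb_tx < nb_pkts && max_retries-- > 0)
		nb_tx += tx(txq, &pkts[nb_tx], (uint16_t)(nb_pkts - nb_tx));
	return nb_tx; /* caller frees any packets still unsent */
}

/* toy transmitter for demonstration: accepts at most 3 packets per call */
static uint16_t
toy_tx(void *txq, void **pkts, uint16_t n)
{
	(void)txq;
	(void)pkts;
	return n < 3 ? n : 3;
}

/* drive an 8-packet burst through the toy transmitter */
static uint16_t
demo_retry(unsigned int retries)
{
	void *pkts[8] = { NULL };

	return tx_burst_with_retry(toy_tx, NULL, pkts, 8, retries);
}
```

With no retries the toy transmitter accepts only 3 of 8 packets; two retries are enough to push the full burst through, which is why retry matters for a loopback where any lost packet permanently shrinks the circulating burst.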
* Re: [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility 2016-06-15 10:04 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility De Lara Guarch, Pablo @ 2016-06-16 14:36 ` Thomas Monjalon 0 siblings, 0 replies; 54+ messages in thread From: Thomas Monjalon @ 2016-06-16 14:36 UTC (permalink / raw) To: Wang, Zhihong Cc: dev, De Lara Guarch, Pablo, Ananyev, Konstantin, Richardson, Bruce > > Zhihong Wang (5): > > testpmd: add retry option > > testpmd: configurable tx_first burst number > > testpmd: show throughput in port stats > > testpmd: handle all rxqs in rss setup > > testpmd: show topology at forwarding start > > Series-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com> Applied, thanks ^ permalink raw reply [flat|nested] 54+ messages in thread
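For completeness, the "show throughput in port stats" item in the applied series derives Rx/Tx rates by diffing successive cumulative counters, since PMD stats are monotonic totals. A hedged sketch of that computation — the struct and function names here are illustrative, not testpmd's:

```c
#include <stdint.h>

/* Hypothetical sketch of the rate computation behind the throughput
 * lines added to "show port stats": keep the previous cumulative
 * packet counter and timestamp per port, and divide the delta of
 * packets by the elapsed time on each display. */
struct pps_state {
	uint64_t prev_pkts;
	uint64_t prev_cycles;
};

static uint64_t
update_pps(struct pps_state *st, uint64_t cur_pkts,
	   uint64_t cur_cycles, uint64_t cycles_per_sec)
{
	uint64_t diff_pkts = cur_pkts - st->prev_pkts;
	uint64_t diff_cycles = cur_cycles - st->prev_cycles;

	st->prev_pkts = cur_pkts;
	st->prev_cycles = cur_cycles;
	if (diff_cycles == 0)
		return 0;
	return diff_pkts * cycles_per_sec / diff_cycles;
}

/* demonstration: 1M packets over 2 seconds of a 1 GHz clock -> 500 kpps */
static uint64_t
demo_pps(void)
{
	struct pps_state st = { 0, 0 };

	return update_pps(&st, 1000000, 2000000000ULL, 1000000000ULL);
}
```

This is why "show port stats all" must be issued repeatedly when monitoring the loopback: each invocation reports the rate since the previous snapshot.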
end of thread, other threads:[~2016-06-28 11:10 UTC | newest] Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2016-05-05 22:46 [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Zhihong Wang 2016-05-05 22:46 ` [dpdk-dev] [PATCH 1/6] testpmd: add io_retry forwarding Zhihong Wang 2016-05-25 9:32 ` Thomas Monjalon 2016-05-26 2:40 ` Wang, Zhihong 2016-05-26 6:27 ` Thomas Monjalon 2016-05-26 9:24 ` Wang, Zhihong 2016-05-05 22:46 ` [dpdk-dev] [PATCH 2/6] testpmd: configurable tx_first burst number Zhihong Wang 2016-05-25 9:35 ` Thomas Monjalon 2016-05-26 2:53 ` Wang, Zhihong 2016-05-26 6:31 ` Thomas Monjalon 2016-05-26 9:31 ` Wang, Zhihong 2016-05-05 22:46 ` [dpdk-dev] [PATCH 3/6] testpmd: show throughput in port stats Zhihong Wang 2016-05-05 22:46 ` [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup Zhihong Wang 2016-05-25 9:42 ` Thomas Monjalon 2016-05-26 2:55 ` Wang, Zhihong 2016-06-03 9:22 ` Wang, Zhihong 2016-05-05 22:47 ` [dpdk-dev] [PATCH 5/6] testpmd: show topology at forwarding start Zhihong Wang 2016-05-25 9:45 ` Thomas Monjalon 2016-05-26 2:56 ` Wang, Zhihong 2016-05-05 22:47 ` [dpdk-dev] [PATCH 6/6] testpmd: update documentation Zhihong Wang 2016-05-25 9:48 ` Thomas Monjalon 2016-05-26 2:54 ` Wang, Zhihong 2016-05-20 8:54 ` [dpdk-dev] [PATCH 0/6] vhost/virtio performance loopback utility Wang, Zhihong 2016-05-25 9:27 ` Thomas Monjalon 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 0/5] " Zhihong Wang 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 1/5] testpmd: add retry option Zhihong Wang 2016-06-07 9:28 ` De Lara Guarch, Pablo 2016-06-08 1:29 ` Wang, Zhihong 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 2/5] testpmd: configurable tx_first burst number Zhihong Wang 2016-06-07 9:43 ` De Lara Guarch, Pablo 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 3/5] testpmd: show throughput in port stats Zhihong Wang 2016-06-07 10:02 ` De Lara Guarch, Pablo 2016-06-08 1:31 ` Wang, Zhihong 2016-06-01 3:27 
` [dpdk-dev] [PATCH v2 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang 2016-06-07 10:29 ` De Lara Guarch, Pablo 2016-06-08 1:28 ` Wang, Zhihong 2016-06-01 3:27 ` [dpdk-dev] [PATCH v2 5/5] testpmd: show topology at forwarding start Zhihong Wang 2016-06-07 10:56 ` De Lara Guarch, Pablo 2016-06-14 15:13 ` De Lara Guarch, Pablo 2016-06-15 7:05 ` Wang, Zhihong 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 1/5] testpmd: add retry option Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 2/5] testpmd: configurable tx_first burst number Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 3/5] testpmd: show throughput in port stats Zhihong Wang 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup Zhihong Wang 2016-06-27 14:23 ` Nélio Laranjeiro 2016-06-27 22:36 ` De Lara Guarch, Pablo 2016-06-28 8:34 ` Nélio Laranjeiro 2016-06-28 11:10 ` Wang, Zhihong 2016-06-14 23:08 ` [dpdk-dev] [PATCH v3 5/5] testpmd: show topology at forwarding start Zhihong Wang 2016-06-16 11:09 ` De Lara Guarch, Pablo 2016-06-16 13:33 ` Thomas Monjalon 2016-06-15 10:04 ` [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility De Lara Guarch, Pablo 2016-06-16 14:36 ` Thomas Monjalon