DPDK patches and discussions
From: 王志宏 <wangzhihong.wzh@bytedance.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>
Cc: dev@dpdk.org, xiaoyun.li@intel.com,
	"Singh, Aman Deep" <aman.deep.singh@intel.com>,
	Igor Russkikh <irusskikh@marvell.com>,
	Cyril Chemparathy <cchemparathy@tilera.com>
Subject: Re: [dpdk-dev] [External] Re: [PATCH v2] app/testpmd: flowgen support ip and udp fields
Date: Thu, 12 Aug 2021 17:32:52 +0800	[thread overview]
Message-ID: <CAMne5nCSG3_yP0h2RSUbOqwqVs2+PFZZZGOWrPtNqvd4+vU_NQ@mail.gmail.com> (raw)
In-Reply-To: <e22b931e-b339-f20a-31f0-416afc6e8f48@intel.com>

On Wed, Aug 11, 2021 at 6:31 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 8/11/2021 3:48 AM, 王志宏 wrote:
> > On Tue, Aug 10, 2021 at 5:12 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >>
> >> On 8/10/2021 8:57 AM, 王志宏 wrote:
> >>> Thanks for the review Ferruh :)
> >>>
> >>> On Mon, Aug 9, 2021 at 11:18 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >>>>
> >>>> On 8/9/2021 7:52 AM, Zhihong Wang wrote:
> >>>>> This patch aims to:
> >>>>>  1. Add flexibility by supporting IP & UDP src/dst fields
> >>>>
> >>>> What is the reason/"use case" of this flexibility?
> >>>
> >>> The purpose is to emulate pkt generator behaviors.
> >>>
> >>
> >> 'flowgen' forwarding already emulates a packet generator, but it only
> >> changes the destination IP.
> >>
> >> What additional benefit does changing the UDP ports of the packets bring?
> >> What is your use case for this change?
> >
> > Pkt generators like pktgen/trex/ixia/spirent can change various fields
> > including ip/udp src/dst.
> >
>
> But testpmd is not a packet generator; it has a very simple 'flowgen'
> forwarding engine, and I would like to understand the motivation for making
> it more complex.

I agree with this *simplicity* point. In fact, my sole intention is to make
flowgen usable for multi-core tests. I'll keep the original setup in
the next patch.

>
> > Keeping the cfg_n_* while setting cfg_n_ip_dst = 1024 and the others = 1
> > keeps the default behavior exactly unchanged. Do you think that makes
> > sense?
> >
> >>
> >>>>
> >>>>>  2. Improve multi-core performance by using per-core vars
> >>>>
> >>>> On multi-core this also has a synchronization problem, so OK to make it
> >>>> per-core. Have you observed any performance difference, and if so, how much?
> >>>
> >>> Huge difference. One example: 8-core flowgen -> rxonly results in 43
> >>> Mpps (per-core) vs. 9.3 Mpps (shared); of course the numbers "vary
> >>> depending on system configuration".
> >>>
> >>
> >> Thanks for clarification.
> >>
> >>>>
> >>>> And can you please separate this to its own patch? This can be before ip/udp update.
> >>>
> >>> Will do.
> >>>
> >>>>
> >>>>> v2: fix assigning ip header cksum
> >>>>>
> >>>>
> >>>> +1 to the update, can you please make it a separate patch?
> >>>
> >>> Sure.
> >>>
> >>>>
> >>>> So overall this can be a patchset with 4 patches:
> >>>> 1- Fix retry logic (nb_rx -> nb_pkt)
> >>>> 2- Use 'rte_ipv4_cksum()' API (instead of static 'ip_sum()')
> >>>> 3- Use per-core variable (for 'next_flow')
> >>>> 4- Support ip/udp src/dst variety of packets
> >>>>
> >>>
> >>> Great summary. Thanks a lot.
> >>>
> >>>>> Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
> >>>>> ---
> >>>>>  app/test-pmd/flowgen.c | 137 +++++++++++++++++++++++++++++++------------------
> >>>>>  1 file changed, 86 insertions(+), 51 deletions(-)
> >>>>>
> >>>>
> >>>> <...>
> >>>>
> >>>>> @@ -185,30 +193,57 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> >>>>>               }
> >>>>>               pkts_burst[nb_pkt] = pkt;
> >>>>>
> >>>>> -             next_flow = (next_flow + 1) % cfg_n_flows;
> >>>>> +             if (++next_udp_dst < cfg_n_udp_dst)
> >>>>> +                     continue;
> >>>>> +             next_udp_dst = 0;
> >>>>> +             if (++next_udp_src < cfg_n_udp_src)
> >>>>> +                     continue;
> >>>>> +             next_udp_src = 0;
> >>>>> +             if (++next_ip_dst < cfg_n_ip_dst)
> >>>>> +                     continue;
> >>>>> +             next_ip_dst = 0;
> >>>>> +             if (++next_ip_src < cfg_n_ip_src)
> >>>>> +                     continue;
> >>>>> +             next_ip_src = 0;
> >>>>
> >>>> What is the logic here? Can you please clarify the packet generation
> >>>> logic, both in a comment here and in the commit log?
> >>>
> >>> It's round-robin field by field. Will add the comments.
> >>>
> >>
> >> Thanks. If the receiving end is doing RSS based on IP address, the dst
> >> address will change every 100 packets and the src every 10000 packets.
> >> This is a slight behavior change.
> >>
> >> When it was only the dst IP, it was simple to just increment it; I'm not
> >> sure about this case. I wonder if we should set all fields randomly for
> >> each packet. I don't know what the better logic is here; we can discuss it
> >> more in the next version.
> >
> > A more sophisticated pkt generator provides various options among
> > "step-by-step" / "random" / etc.
> >
> > But supporting multiple fields naturally brings this implicitly. It
> > won't be a problem as it can be configured by setting the cfg_n_* as
> > we discussed above.
> >
> > I think rte_rand() is a good option, anyway this can be tweaked easily
> > once the framework becomes shaped.
> >
>
> Can be done, but do we really want to add more packet generator capability to
> testpmd?
>
> >>
> >>>>
> >>>>>       }
> >>>>>
> >>>>>       nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
> >>>>>       /*
> >>>>>        * Retry if necessary
> >>>>>        */
> >>>>> -     if (unlikely(nb_tx < nb_rx) && fs->retry_enabled) {
> >>>>> +     if (unlikely(nb_tx < nb_pkt) && fs->retry_enabled) {
> >>>>>               retry = 0;
> >>>>> -             while (nb_tx < nb_rx && retry++ < burst_tx_retry_num) {
> >>>>> +             while (nb_tx < nb_pkt && retry++ < burst_tx_retry_num) {
> >>>>>                       rte_delay_us(burst_tx_delay_time);
> >>>>>                       nb_tx += rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
> >>>>> -                                     &pkts_burst[nb_tx], nb_rx - nb_tx);
> >>>>> +                                     &pkts_burst[nb_tx], nb_pkt - nb_tx);
> >>>>>               }
> >>>>
> >>>> +1 to this fix, thanks for it. But can you please make a separate patch
> >>>> for this, with a proper 'Fixes:' tag etc.?
> >>>
> >>> Ok.
> >>>
> >>>>
> >>>>>       }
> >>>>> -     fs->tx_packets += nb_tx;
> >>>>>
> >>>>>       inc_tx_burst_stats(fs, nb_tx);
> >>>>> -     if (unlikely(nb_tx < nb_pkt)) {
> >>>>> -             /* Back out the flow counter. */
> >>>>> -             next_flow -= (nb_pkt - nb_tx);
> >>>>> -             while (next_flow < 0)
> >>>>> -                     next_flow += cfg_n_flows;
> >>>>> +     fs->tx_packets += nb_tx;
> >>>>> +     /* Catch up flow idx by actual sent. */
> >>>>> +     for (i = 0; i < nb_tx; ++i) {
> >>>>> +             RTE_PER_LCORE(_next_udp_dst) = RTE_PER_LCORE(_next_udp_dst) + 1;
> >>>>> +             if (RTE_PER_LCORE(_next_udp_dst) < cfg_n_udp_dst)
> >>>>> +                     continue;
> >>>>> +             RTE_PER_LCORE(_next_udp_dst) = 0;
> >>>>> +             RTE_PER_LCORE(_next_udp_src) = RTE_PER_LCORE(_next_udp_src) + 1;
> >>>>> +             if (RTE_PER_LCORE(_next_udp_src) < cfg_n_udp_src)
> >>>>> +                     continue;
> >>>>> +             RTE_PER_LCORE(_next_udp_src) = 0;
> >>>>> +             RTE_PER_LCORE(_next_ip_dst) = RTE_PER_LCORE(_next_ip_dst) + 1;
> >>>>> +             if (RTE_PER_LCORE(_next_ip_dst) < cfg_n_ip_dst)
> >>>>> +                     continue;
> >>>>> +             RTE_PER_LCORE(_next_ip_dst) = 0;
> >>>>> +             RTE_PER_LCORE(_next_ip_src) = RTE_PER_LCORE(_next_ip_src) + 1;
> >>>>> +             if (RTE_PER_LCORE(_next_ip_src) < cfg_n_ip_src)
> >>>>> +                     continue;
> >>>>> +             RTE_PER_LCORE(_next_ip_src) = 0;
> >>>>> +     }
> >>>>
> >>>> Why are per-core variables not used in the forward function, while local
> >>>> variables (like 'next_ip_src' etc.) are? Is it for performance? If so,
> >>>> what is the impact?
> >>>>
> >>>> And why not directly assign from the local variables to the per-core
> >>>> variables, instead of the catch-up loop above?
> >>>>
> >>>>
> >>>
> >>> Local vars are used for generating pkts; the global ones catch up at the
> >>> end, once nb_tx is known.
> >>
> >> Why are you not using the global ones to generate packets? That would
> >> remove the need for the catch-up.
> >
> > When there are multiple fields, backing out the overrun indexes caused by
> > dropped packets is not that straightforward -- it's the "carry" issue in
> > addition.
> >
> >>
> >>> So the flow indexes only increase by the number of pkts actually sent.
> >>> It serves the same purpose as the original "/* backout the flow counter */".
> >>> My math isn't good enough to make it look more intelligent though.
> >>>
> >>
> >> Maybe I am missing something, but for this case why not just assign back
> >> from the locals to the globals?
> >
> > As above.
> >
> > However, this can be simplified if we discard the "back out" mechanism: if
> > we generate 32 pkts and send 20 of them while the remaining 12 are dropped,
> > the difference is whether the idx starts from 21 or 33 next time.
> >
>
> I am not sure about the point of "back out"; I think we can remove it unless
> there is an objection, so the receiving end can recognize failed packets.
>


Thread overview: 32+ messages
2021-08-09  6:25 [dpdk-dev] [PATCH] " Zhihong Wang
2021-08-09  6:52 ` [dpdk-dev] [PATCH v2] " Zhihong Wang
2021-08-09 12:21   ` Singh, Aman Deep
2021-08-10  7:30     ` [dpdk-dev] [External] " 王志宏
2021-08-09 15:18   ` [dpdk-dev] " Ferruh Yigit
2021-08-10  7:57     ` [dpdk-dev] [External] " 王志宏
2021-08-10  9:12       ` Ferruh Yigit
2021-08-11  2:48         ` 王志宏
2021-08-11 10:31           ` Ferruh Yigit
2021-08-12  9:32             ` 王志宏 [this message]
2021-08-12 11:11 ` [dpdk-dev] [PATCH v3 0/4] app/testpmd: flowgen fixes and improvements Zhihong Wang
2021-08-12 11:11   ` [dpdk-dev] [PATCH v3 1/4] app/testpmd: fix tx retry in flowgen Zhihong Wang
2021-08-12 11:11   ` [dpdk-dev] [PATCH v3 2/4] app/testpmd: use rte_ipv4_cksum " Zhihong Wang
2021-08-12 11:11   ` [dpdk-dev] [PATCH v3 3/4] app/testpmd: record rx_burst and fwd_dropped " Zhihong Wang
2021-08-12 11:11   ` [dpdk-dev] [PATCH v3 4/4] app/testpmd: use per-core variable " Zhihong Wang
2021-08-12 13:18 ` [dpdk-dev] [PATCH v4 0/4] app/testpmd: flowgen fixes and improvements Zhihong Wang
2021-08-12 13:18   ` [dpdk-dev] [PATCH v4 1/4] app/testpmd: fix tx retry in flowgen Zhihong Wang
2021-08-13  1:33     ` Li, Xiaoyun
2021-08-13  2:27       ` [dpdk-dev] [External] " 王志宏
2021-08-12 13:18   ` [dpdk-dev] [PATCH v4 2/4] app/testpmd: use rte_ipv4_cksum " Zhihong Wang
2021-08-13  1:37     ` Li, Xiaoyun
2021-08-12 13:19   ` [dpdk-dev] [PATCH v4 3/4] app/testpmd: record rx_burst and fwd_dropped " Zhihong Wang
2021-08-13  1:44     ` Li, Xiaoyun
2021-08-12 13:19   ` [dpdk-dev] [PATCH v4 4/4] app/testpmd: use per-core variable " Zhihong Wang
2021-08-13  1:56     ` Li, Xiaoyun
2021-08-13  2:35       ` [dpdk-dev] [External] " 王志宏
2021-08-13  8:05 ` [dpdk-dev] [PATCH v5 0/4] app/testpmd: flowgen fixes and improvements Zhihong Wang
2021-08-13  8:05   ` [dpdk-dev] [PATCH v5 1/4] app/testpmd: fix tx retry in flowgen Zhihong Wang
2021-08-13  8:05   ` [dpdk-dev] [PATCH v5 2/4] app/testpmd: use rte_ipv4_cksum " Zhihong Wang
2021-08-13  8:05   ` [dpdk-dev] [PATCH v5 3/4] app/testpmd: record rx_burst and fwd_dropped " Zhihong Wang
2021-08-13  8:05   ` [dpdk-dev] [PATCH v5 4/4] app/testpmd: use per-core variable " Zhihong Wang
2021-08-24 17:21   ` [dpdk-dev] [PATCH v5 0/4] app/testpmd: flowgen fixes and improvements Ferruh Yigit
