From: James Huang <jamsphon@gmail.com>
To: Victor Huertas <vhuertas@gmail.com>
Cc: dev <dev@dpdk.org>, cristian.dumitrescu@intel.com
Subject: Re: [dpdk-dev] Fwd: high latency detected in IP pipeline example
Date: Mon, 17 Feb 2020 15:10:35 -0800 [thread overview]
Message-ID: <CAFpuyR5MyP97KkdPu8CET57-GgCH2AtNkWnpmJ_Fujnp0XOG1w@mail.gmail.com> (raw)
In-Reply-To: <CAGxG5chUzK+TeLY+FGJAhwGPxE0dnV7+Cby7Q_KJjwtMOmNdLg@mail.gmail.com>
Yes, I experienced a similar issue in my application. Short answer: setting
the SWQ write burst value to 1 may reduce the latency significantly. The
default write burst value is 32, so at a low packet rate a packet can sit in
the SWQ writer's buffer until the burst fills or the pipeline is flushed.
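For reference, with the stock 17.11 ip_pipeline config files this is a
per-SWQ setting. A minimal sketch (section/key names from memory, so please
double-check them against config_parse.c in the example):

    [SWQ0]
    ; default burst_write is 32; with 1, every packet is pushed to the
    ; underlying ring immediately instead of waiting for the burst to fill
    burst_write = 1

If you create the queues from your own code instead, the equivalent knob is
the tx_burst_sz field of struct rte_port_ring_writer_params for the SWQ's
output port.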
On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas <vhuertas@gmail.com> wrote:
> Hi all,
>
> I am developing my own DPDK application based on the dpdk-stable
> ip_pipeline example.
> At this moment I am using the 17.11 LTS version of DPDK and I am observing
> some strange behaviour. Maybe it is an old issue that can be solved
> quickly, so I would appreciate it if some expert could shed some light on it.
>
> The ip_pipeline example allows you to develop pipelines that perform
> specific packet processing functions (ROUTING, FLOW_CLASSIFYING, etc.).
> The thing is that I am extending some of these pipelines with my own.
> However, I want to take advantage of the built-in ip_pipeline capability of
> arbitrarily assigning the logical core where each pipeline (its f_run()
> function) must be executed, so that I can adapt the packet processing power
> to the number of cores available.
> Taking this into account, I have observed something strange, which I show
> in the simple example below.
>
> Case 1:
> [PIPELINE 0 MASTER core=0]
> [PIPELINE 1 core=1] --- SWQ1--->[PIPELINE 2 core=2] -----SWQ2---->
> [PIPELINE 3 core=3]
>
> Case 2:
> [PIPELINE 0 MASTER core=0]
> [PIPELINE 1 core=1] --- SWQ1--->[PIPELINE 2 core=1] -----SWQ2---->
> [PIPELINE 3 core=1]
>
> I send a ping between two hosts connected at either side of the pipeline
> model, which makes the pings cross all the pipelines (from 1 to 3).
> What I observe in Case 1 (each pipeline has its own thread on a different
> core) is that the reported RTT is less than 1 ms, whereas in Case 2 (all
> pipelines except MASTER run in the same thread) it is 20 ms. Furthermore,
> in Case 2, if I increase the packet rate a lot (hundreds of Mbps), the RTT
> decreases to 3 or 4 ms.
>
> Has somebody observed this behaviour in the past? Can it be solved somehow?
>
> Thanks a lot for your attention
> --
> Victor
>
Thread overview: 10+ messages
[not found] <CAGxG5cjY+npJ7wVqcb9MXdtKkpC6RrgYpDQA2qbaAjD7i7C2EQ@mail.gmail.com>
2020-02-17 16:41 ` Victor Huertas
2020-02-17 23:10 ` James Huang [this message]
2020-02-18 7:04 ` Victor Huertas
2020-02-18 7:18 ` James Huang
2020-02-18 9:49 ` Victor Huertas
2020-02-18 22:08 ` James Huang
2020-02-19 8:29 ` Victor Huertas
2020-02-19 10:37 ` [dpdk-dev] Fwd: " Victor Huertas
2020-02-19 10:53 ` Olivier Matz
2020-02-19 12:05 ` Victor Huertas