DPDK usage discussions
From: Zhihan Jiang <jackcharm233@gmail.com>
To: Kyle Larose <eomereadig@gmail.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] DPDK weird forwarding loss
Date: Wed, 2 Dec 2015 16:50:54 +0900	[thread overview]
Message-ID: <CAFC-b777C83kCmmr9Jzeiqt-ViRNZRHRYHX6PBc_drcuTQw6hQ@mail.gmail.com> (raw)
In-Reply-To: <CAMFWN9kX2K0wsuqJf2w7fJEC_QZNVG7cfiUrpKr_HLwvG8GcQw@mail.gmail.com>

Hey Kyle,

Thank you for the help. I tried adding isolcpus=20-27 (the 8 cores I pinned to
the VM on the host) to /etc/grub.conf (and I also tried adding isolcpus inside
the guest VM), but it doesn't seem to help. I also tried disabling SMT
(hyper-threading), but that only made the results worse.
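
For reference, this is roughly what the change looks like on my box (mine uses
the old-style /etc/grub.conf; the grub2 file and regenerate step below are just
how it would look on a grub2 setup, so treat this as a sketch):

  # appended to the kernel boot line in /etc/grub.conf, or to
  # GRUB_CMDLINE_LINUX in /etc/default/grub on grub2 systems:
  ... intel_iommu=on iommu=pt isolcpus=20-27

  # on grub2 systems, regenerate the config afterwards:
  grub2-mkconfig -o /boot/grub2/grub.cfg

  # after reboot, verify the kernel actually picked up the parameter:
  cat /proc/cmdline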

I ran some performance tests and will list the results here just in case. To
clarify, TX loss means packets dropped before they reach DPDK, and FWD loss
means packets dropped by the DPDK l3fwd app.

64-byte packets

MPPS      Loss Rate    TX Loss    FWD Loss
14.88     6.810%       0.018%     6.791%
14.00     1.029%       0.003%     1.000%
13.00     0.273%       0.003%     0.270%
12.00     0.211%       0.211%     0.000%
11.00     0.216%       0.216%     0.000%
10.00     0.240%       0.240%     0.000%
 9.00     0.250%       0.250%     0.000%
 8.00     0.248%       0.248%     0.000%

The trend is clear enough from these stats, but it really bugs me that TX loss
decreases as the PPS increases. I wondered whether it was a bug on the IXIA
side, but I checked it today and the packets it sends look fine.
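
In case it helps, this is how I plan to double-check that the l3fwd lcores are
really not being preempted and where the TX loss is being counted (the process
name and interface name below are just examples from my setup):

  # confirm which CPUs the l3fwd threads are actually running on
  taskset -cp $(pidof l3fwd)

  # watch for involuntary context switches on the polling threads;
  # these counters should stay flat while traffic is running
  grep nonvoluntary_ctxt_switches /proc/$(pidof l3fwd)/task/*/status

  # check whether drops are already counted on the host PF before
  # the packets ever reach the VF / DPDK
  ethtool -S enp4s0f0 | grep -iE 'miss|drop|err'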

Thanks for any help.
Jack



On Wed, Dec 2, 2015 at 2:55 AM, Kyle Larose <eomereadig@gmail.com> wrote:

> Hi Jack,
>
> On Mon, Nov 30, 2015 at 8:42 PM, Zhihan Jiang <jackcharm233@gmail.com>
> wrote:
> > Hello,
> >
>
> > Other settings: intel_iommu=on iommu=pt / blacklist ixgbevf on the host
> > machine / pause frame off / pin all the ports to the same NUMA node &
> > socket / VM uses CPU on the same NUMA node & socket.
> >
>
> > but there is always ~0.5%- 1% packet loss
> >
> > The command line for l3fwd is:
> > ./build/l3fwd -c6 -n4 -w [whitelist devices] --socket-mem=1024,0 -- -p3
> > --config "(0,0,1), (1,0,2)"
> >
>
>
> Try isolating the CPUs on your guest and host so that the forwarding
> application cannot be preempted by something else.
>
> See the 'isolcpus' kernel boot parameter.
>
>
> http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re46.html
>
> In my experience, drops that happen at low rates are caused by the
> polling thread not being scheduled to receive packets often enough.
> This can either be because it was preempted, which isolcpus will fix,
> or because it is sleeping. IIRC l3fwd doesn't sleep, which leaves the
> first case the only possibility.
>
> Hope that helps,
>
> Kyle
>

