DPDK usage discussions
From: Vincent Li <vincent.mc.li@gmail.com>
To: Pavel Vajarov <freakpv@gmail.com>
Cc: Vincent Li <vincent.mc.li@gmail.com>, users <users@dpdk.org>
Subject: Re: [dpdk-users] Performance troubleshooting of TCP/IP stack over DPDK.
Date: Wed, 27 May 2020 09:44:49 -0700 (PDT)
Message-ID: <alpine.OSX.2.21.2005270938540.40131@sea-ml-00029224.olympus.f5net.com>
In-Reply-To: <CAK9EM19xw3Pjvr4TaJDKsoqXbn8TwFsondDgRPaXT0oJ_wvYDQ@mail.gmail.com>



On Wed, 27 May 2020, Pavel Vajarov wrote:

>       > Hi there,
>       >
>       > We are trying to compare the performance of the DPDK+FreeBSD
>       > networking stack vs the standard Linux kernel, and we are having
>       > trouble finding out why the former is slower. The details are below.
>       >
>       > There is a project called F-Stack <https://github.com/F-Stack/f-stack>.
>       > It glues the networking stack from FreeBSD 11.01 on top of DPDK. We
>       > made a setup to test the performance of a transparent TCP proxy based
>       > on F-Stack against another one running on the standard Linux kernel.
> 
>       I assume you wrote your own TCP proxy based on the F-Stack library?
> 
> 
> Yes, I wrote a transparent TCP proxy based on the F-Stack library for the
> tests. The thing is that we have our transparent caching proxy running on
> Linux, and now we are trying to find ways to improve its performance and
> hardware requirements.
>  
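For anyone following the thread: the general shape of such an F-Stack
application is sketched below. This is only a minimal outline based on the
public F-Stack API (ff_api.h / ff_epoll.h), not Pavel's actual code; the
bind/listen setup and the proxy logic are omitted, and the batch size of
512 is an illustrative value:

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <sys/epoll.h>
  #include <ff_api.h>     /* ff_init, ff_run, ff_socket, ...              */
  #include <ff_epoll.h>   /* ff_epoll_create, ff_epoll_ctl, ff_epoll_wait */

  static int epfd, listenfd;

  /* F-Stack calls this once per loop iteration; it must not block. */
  static int loop(void *arg)
  {
      (void)arg;
      struct epoll_event events[512];              /* illustrative batch size */
      int n = ff_epoll_wait(epfd, events, 512, 0); /* timeout 0 = poll mode   */
      for (int i = 0; i < n; i++) {
          /* accept new connections and move bytes between the two
             sides of each proxied connection (omitted) */
      }
      return 0;
  }

  int main(int argc, char *argv[])
  {
      ff_init(argc, argv);            /* reads config.ini, brings up DPDK */
      listenfd = ff_socket(AF_INET, SOCK_STREAM, 0);
      /* ... ff_bind()/ff_listen() on the proxy port ... */
      epfd = ff_epoll_create(1);      /* size hint, ignored */
      struct epoll_event ev = { .events = EPOLLIN };
      ev.data.fd = listenfd;
      ff_epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);
      ff_run(loop, NULL);             /* never returns */
      return 0;
  }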
>       >
>       > Here are the test results:
>       > 1. The Linux-based proxy was able to handle about 1.7-1.8 Gbps before
>       > it started to throttle the traffic. No visible CPU usage was observed
>       > on core 0 during the tests; only core 1, where the application and the
>       > IRQs were pinned, took the load.
>       > 2. The DPDK+FreeBSD proxy was able to handle 700-800 Mbps before it
>       > started to throttle the traffic. Again, no visible CPU usage was
>       > observed on core 0 during the tests; only core 1, where the application
>       > was pinned, took the load. In some of the later tests I changed the
>       > number of packets read from the network card in one call and the
>       > number of events handled in one call to epoll (see the sketch just
>       > after this quoted list). With these changes I was able to increase the
>       > throughput to 900-1000 Mbps, but couldn't push it further.
>       > 3. We did another test with the DPDK+FreeBSD proxy just to give us
>       > some more information about the problem. We disabled the TCP proxy
>       > functionality and let the packets simply be IP-forwarded by the
>       > FreeBSD stack. In this test we reached up to 5 Gbps without the proxy
>       > ever starting to throttle the traffic; we just don't have more traffic
>       > to redirect there at the moment. So the bottleneck seems to be either
>       > in the upper layers of the network stack or in the application code.
>       >
> 
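Regarding the two knobs in point 2 above: those map to the RX burst size
handed to rte_eth_rx_burst() and the maxevents count handed to
ff_epoll_wait(). A rough sketch of the two call sites is below; note the
burst size lives inside F-Stack's own loop, so changing it means patching
and rebuilding the library, and the constants here are illustrative values,
not F-Stack's defaults:

  #include <stdint.h>
  #include <sys/epoll.h>
  #include <rte_ethdev.h>   /* rte_eth_rx_burst() */
  #include <rte_mbuf.h>
  #include <ff_epoll.h>     /* ff_epoll_wait() */

  /* Illustrative values: larger batches amortize per-call overhead
   * at the cost of per-packet latency. */
  enum { RX_BURST = 32, MAX_EVENTS = 512 };

  static void poll_once(uint16_t port_id, int epfd)
  {
      struct rte_mbuf *pkts[RX_BURST];
      struct epoll_event events[MAX_EVENTS];

      /* Packets pulled from the NIC in one call... */
      uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, RX_BURST);
      (void)nb_rx;   /* normally handed to the stack here */

      /* ...and socket events drained in one call. */
      int n = ff_epoll_wait(epfd, events, MAX_EVENTS, 0);
      (void)n;       /* normally dispatched to the app here */
  }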
>       I once tested the F-Stack-ported Nginx as a TCP proxy and could
>       achieve above 6 Gbps with iperf. After seeing your email, I set up
>       PCI passthrough to a KVM VM and ran F-Stack Nginx as a web server
>       with an HTTP load test (no proxy); I could achieve about 6.5 Gbps.
> 
> Can I ask how many cores you ran Nginx on?

I used 4 cores on the VM.
 
> The results from our tests are from a single core. We are trying to reach
> max performance on a single core because we know that the F-Stack solution
> scales linearly. We tested it on 3 cores and got around 3 Gbps, which is
> 3 times the single-core result.
> Also, we test with traffic from one internet service provider. We just
> redirect a few IP pools to the test machine for the duration of the tests,
> watch at which point the proxy starts choking on the traffic, and then
> switch the traffic back.
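For reference, the per-core scaling in F-Stack is driven by lcore_mask in
config.ini: each set bit enables one core, you run one application process
per enabled core, and the NIC spreads flows across them via RSS. A sketch,
assuming the stock config format (the values are examples, not your setup):

  [dpdk]
  # 0x7 = cores 0-2, i.e. three F-Stack processes
  lcore_mask=7
  # NIC port(s) the stack drives
  port_list=0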

I used the mTCP-ported Apache Bench for the load test. Since the F-Stack
and Apache Bench machines are directly connected with a cable, and running
a capture on either the mTCP or the F-Stack side would affect performance,
I do not have a capture to tell whether there were significant packet
drops while achieving 6.5 Gbps.

> 
>       > There is a Huawei switch which redirects the traffic to this server.
>       > It regularly sends ARP pings, and if the server doesn't respond it
>       > stops the redirection. So we assumed that when the redirection stops,
>       > it's because the server is throttling the traffic and dropping
>       > packets, and can't respond to the ARP pings because of the dropped
>       > packets.
> 
>       I did have a weird issue with ARP in F-Stack: I manually added a
>       static ARP entry for the F-Stack interface, one per F-Stack process.
>       Not sure if it is related to your ARP problem; see
>       https://github.com/F-Stack/f-stack/issues/515
> 
> Hmm, I'd missed that. Thanks a lot; it may help with the tests and with
> the next stage.
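In case it helps when you get to that stage, what I did was roughly the
following, using the FreeBSD arp tool that F-Stack ships under tools/. The
exact path and the -p flag (which selects the F-Stack process) are from
memory, so double-check against the issue above; the IP and MAC are
placeholders:

  # one static entry per F-Stack process; -p picks the process id
  ./tools/sbin/arp -p 0 -s 192.168.1.1 00:11:22:33:44:55
  ./tools/sbin/arp -p 1 -s 192.168.1.1 00:11:22:33:44:55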
