DPDK usage discussions
From: Daniel Bush <thunderwolf66102@yahoo.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Very slow upload speed with VPP
Date: Sat, 29 Aug 2020 05:36:46 +0000 (UTC)
Message-ID: <2110377401.52484.1598679406563@mail.yahoo.com>
In-Reply-To: <2110377401.52484.1598679406563.ref@mail.yahoo.com>

Hello,
I am preparing a system to aggregate multiple broadband connections, and have set up a box (currently running Gentoo, though I get the same results under Ubuntu 18.04) to run my existing circuit through until the fiber circuit arrives.

I have successfully configured the box, creating a host interface that is bridged to GigabitEthernet1/0/0, which gives the usual apps and the kernel their network access. However, when I run speedtest-cli against a capable server, downstream through VPP is equivalent to what kernel networking provides, but upstream is nowhere near it: 3.80 Mbps up, where the kernel (or any other computer on the network) gets 40 Mbps up. The fastest desktop here gets 700+ Mbps down, and VPP's downstream matches the kernel's on my testing system at 560 Mbps.
The system is a Xeon E3-1220 with 16 GB of RAM.
###############################  Testing script

#!/bin/sh
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vpp -c /etc/vpp/startup.conf &
sleep 60
ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add x.x.x.1/24 dev vpp1host
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl create host-interface name vpp1out
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl set int state host-vpp1out up
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl set int ip address host-vpp1out x.x.x.2/24
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl set int l2 bridge GigabitEthernet1/0/0 1
/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl set int l2 bridge host-vpp1out 1
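After the script runs, the drops and tx-errors visible on the interfaces can be localized with VPP's debug CLI; a minimal diagnostic pass might look like this (a sketch, assuming the same vppctl install path as above; `show errors`, `show hardware-interfaces`, and `show runtime` are standard VPP CLI commands):

```shell
#!/bin/sh
# Sketch: reset counters, generate traffic, then inspect where packets
# are dropped or errored. Path assumes the same install prefix as above.
VPPCTL=/usr/src/vpp/build-root/install-vpp-native/vpp/bin/vppctl

$VPPCTL clear interfaces          # zero per-interface counters
$VPPCTL clear errors              # zero per-node error counters

speedtest-cli                     # generate upstream/downstream traffic

$VPPCTL show errors               # which graph node is counting drops
$VPPCTL show hardware-interfaces  # NIC ring stats, rx/tx errors, carrier
$VPPCTL show runtime              # per-node vector rates (CPU saturation)
```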


############################### startup.conf

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  default
}

cpu {
  main-core 0
  corelist-workers 1-3
}

dpdk {
  dev default {
  }
  iova-mode pa
  dev 0000:01:00.0
  dev 0000:01:00.1
  dev 0000:01:00.2
  dev 0000:01:00.3
}
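For completeness, startup.conf also exposes per-device queue and descriptor sizing plus the buffer-pool size, which are the usual first knobs for throughput problems; a sketch of what that could look like (illustrative values, not a tested recommendation):

```
dpdk {
  iova-mode pa
  dev 0000:01:00.0 {
    num-rx-queues 2     # illustrative: spread rx across the worker cores
    num-tx-queues 2
    num-rx-desc 1024    # illustrative: larger rx/tx rings
    num-tx-desc 1024
  }
}

buffers {
  buffers-per-numa 128000   # illustrative: enlarge the buffer pool
}
```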


###############################  vppctl show interface  (quad-port e1000 PCIe card)

GigabitEthernet1/0/0              1      up          9000/0/0/0     rx packets                871738
                                                                    rx bytes              1287684149
                                                                    tx packets                481717
                                                                    tx bytes                48185913
                                                                    drops                        274
                                                                    tx-error                      13
GigabitEthernet1/0/1              2     down         9000/0/0/0
GigabitEthernet1/0/2              3     down         9000/0/0/0
GigabitEthernet1/0/3              4     down         9000/0/0/0
host-vpp1out                      5      up          9000/0/0/0     rx packets                481773
                                                                    rx bytes                48200633
                                                                    tx packets                871040
                                                                    tx bytes              1287585187
                                                                    drops                         39
                                                                    ip6                            4
local0                            0     down          0/0/0/0

############################### vppctl show pci
0000:01:00.0   0  8086:150e   5.0 GT/s x4  vfio-pci
0000:01:00.1   0  8086:150e   5.0 GT/s x4  vfio-pci
0000:01:00.2   0  8086:150e   5.0 GT/s x4  vfio-pci
0000:01:00.3   0  8086:150e   5.0 GT/s x4  vfio-pci
0000:03:00.0   0  8086:1533   2.5 GT/s x1  <NONE>
0000:04:00.0   0  8086:1533   2.5 GT/s x1  <NONE>

############################### vppctl show physmem

used-pages 20 reserved-pages 8192 default-page-size 2MB lookup-page-size 2MB
  arena 'buffers-numa-0' pages 20 subpage-size 2MB numa-node 0 shared fd 6


What suggestions are there to improve the upload speed?


