From: Pierre Laurent <pierre.laurent@emutex.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] HOW performance to run DPDK at ARM64 arch?
Date: Thu, 27 Dec 2018 16:41:57 +0000 [thread overview]
Message-ID: <2736c618-8a28-a267-d05f-93021a3d5004@emutex.com> (raw)
In-Reply-To: <1397ab53.8240.167eefaa3c7.Coremail.win239@126.com>
Hi,
Regarding your question 2: the Tx+Rx numbers you get look suspiciously like you are trying to run full-duplex traffic over a PCIe x4 link.
The real bandwidth an interface needs on the PCIe bus is approximately ((pkt size + 48) * pps).
The 48 bytes are the approximate per-packet overhead for NIC descriptors and PCIe protocol. This is an undocumented heuristic.
I assume you are using the default DPDK options, so the Ethernet FCS never crosses the PCIe bus (it is stripped by the NIC on Rx and generated by the NIC on Tx). The same goes for the 20 bytes of Ethernet preamble and inter-frame gap.
If I assume you are using 60-byte packets: (60 + 48) * (14 + 6) Mpps * 8 ≈ 17 Gbps, which is more or less the bandwidth of a bidirectional x4 interface.
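The arithmetic above can be sketched quickly. Note the 48-byte per-packet overhead is the undocumented heuristic from this mail, not a datasheet figure:

```python
# Rough PCIe bandwidth estimate for the rates seen in this thread.
# The 48 bytes of per-packet overhead (descriptors + PCIe protocol)
# is the heuristic from the text above, not a documented number.

def pcie_gbps(pkt_bytes, mpps):
    """Approximate PCIe bandwidth in Gbps for frames of pkt_bytes at mpps Mpps."""
    return (pkt_bytes + 48) * mpps * 1e6 * 8 / 1e9

# 60-byte frames (FCS already stripped by the NIC),
# ~14 Mpps in one direction plus ~6 Mpps in the other:
total = pcie_gbps(60, 14) + pcie_gbps(60, 6)
print(f"{total:.1f} Gbps")  # ~17 Gbps, roughly a bidirectional x4 link
```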
Tools like "lspci" and "dmidecode" will help you investigate the real capabilities of the PCIe slots your 82599 cards are plugged into.
The output of dmidecode looks like the following example; x2, x4, x8, x16 indicate the number of lanes an interface will be able to use. The more lanes, the faster.
System Slot Information
Designation: System Slot 1
Type: x8 PCI Express
Current Usage: Available
Length: Long
ID: 1
Characteristics:
3.3 V is provided
To use an 82599 at full bidirectional rate, you need at least an x8 interface (1 port) or an x16 interface (2 ports).
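A minimal sketch of the lane arithmetic, assuming PCIe Gen2 (5 GT/s per lane with 8b/10b encoding, so about 4 Gbps of payload per lane per direction), which is the generation the 82599 supports:

```python
# Usable PCIe bandwidth per direction by lane count, assuming Gen2:
# 5 GT/s per lane, 8b/10b encoding -> ~4 Gbps of payload per lane.
USABLE_GBPS_PER_LANE = 5.0 * 8 / 10  # 4.0

for lanes in (2, 4, 8, 16):
    print(f"x{lanes}: {lanes * USABLE_GBPS_PER_LANE:.0f} Gbps per direction")

# One 10G port at 64-byte line rate (~14.88 Mpps) moves roughly
# 14.88e6 * (60 + 48) * 8 = ~12.9 Gbps across the bus per direction,
# so an x4 slot (16 Gbps) leaves little headroom once TLP overhead
# is counted; hence x8 for one port and x16 for both.
```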
Regards,
Pierre
On 27/12/2018 09:24, 金塔尖之鑫 wrote:
Recently I have been testing DPDK 18.08 on my arm64 machine, with dpdk-pktgen 3.5.2,
but the performance of bidirectional traffic against an x86 machine is very low.
Here is my data:
Hardware conditions:
  arm64: CPU - 64 cores @ 1.5 GHz
         MEM - 64 GiB
         NIC - 82599ES dual port
  x86:   CPU - 4 cores @ 3.2 GHz
         MEM - 4 GiB
         NIC - 82599ES dual port
Software conditions:
  kernel:
    arm64: linux-4.4.58
    x86: Ubuntu 16.04, 4.4.0-generic
  tools:
    DPDK 18.08, dpdk-pktgen 3.5.2
test:
|-------|                                        |-------|
| arm64 | port0 <---- bi-directional ----> port0 |  x86  |
|-------|                                        |-------|
Result:
                     arm64               x86
  Pkts/s  (Rx/Tx)    10.2/6.0 Mpps       6.0/14.8 Mpps
  MBits/s (Rx/Tx)    7000/3300 Mbit/s    3300/9989 Mbit/s
Questions:
1. Why is DPDK performance so much worse on the arm64 architecture than on x86?
2. As shown above, the Tx direction does not reach full rate. Why do Rx and Tx affect each other?