From: Bob Chen <beef9999@qq.com>
To: Zachary <zachary.jen@cas-well.com>,
dev <dev@dpdk.org>
Subject: [dpdk-dev] Re: DPDK & QPI performance issue in Romley platform.
Date: Wed, 4 Sep 2013 00:19:28 +0800 [thread overview]
Message-ID: <tencent_1ECFEB4132D926E26B774824@qq.com> (raw)
In-Reply-To: <52240466.7050907@cas-well.com>
QPI bandwidth is certainly large enough, but QPI only carries traffic between the two CPU sockets. What your test actually does is access memory attached to the other socket: packets land in buffers on one NUMA node while cores on the other node process them, so every buffer access crosses QPI. You are probably not even close to the bandwidth limit; it is the added latency of each remote (NUMA) access that hurts, and many factors contribute to that latency.
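One way to see whether a port and its polling core are on the same socket is to compare their sysfs entries. A minimal sketch, assuming standard Linux sysfs paths; `eth0` and core 2 are placeholders, and the `SYSFS` variable exists only so the helpers can be exercised against a fake tree:

```shell
# Compare a NIC's NUMA node with a core's physical socket via sysfs.
SYSFS=${SYSFS:-/sys}

# NUMA node the NIC's PCI device is attached to (-1 means unknown)
nic_node() { cat "$SYSFS/class/net/$1/device/numa_node"; }

# Physical package (socket) a CPU core belongs to; on two-socket Romley
# boxes this usually equals the NUMA node, but that is an assumption.
core_socket() { cat "$SYSFS/devices/system/cpu/cpu$1/topology/physical_package_id"; }

# Usage on a real box -- 'eth0' and core 2 are placeholders:
#   if [ "$(nic_node eth0)" = "$(core_socket 2)" ]; then
#       echo "local: no QPI crossing"
#   else
#       echo "remote: every packet buffer access crosses QPI"
#   fi
```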
/Bob
------------------ Original Message ------------------
From: "Zachary" <zachary.jen@cas-well.com>
Date: Monday, September 2, 2013, 11:22 AM
To: "dev" <dev@dpdk.org>
Cc: "Yannic.Chou (周哲正) : 6808" <yannic.chou@cas-well.com>; "Alan Yu : 6632" <Alan.Yu@cas-well.com>
Subject: [dpdk-dev] DPDK & QPI performance issue in Romley platform.
Hi~
I have a question about DPDK & QPI performance on the Romley platform.
Recently, I used the DPDK example l2fwd to test DPDK's performance on my Romley platform.
When the test crosses CPUs (the forwarding cores are on the opposite socket from the NIC), the performance drops dramatically.
Is this expected? Is there any method to verify the phenomenon?
In my opinion there should be no such issue, since QPI has enough bandwidth to handle this kind of case.
Thus, I am quite surprised by our results and cannot explain them.
Could someone help me solve this problem?
Thanks a lot!
My testing environment is described below:
Platform: Romley
CPU: E5-2643 * 2
RAM: Transcend 8GB PC3-1600 DDR3 * 8
OS: Fedora core 14
DPDK: v1.3.1r2, example/l2fwd
Slot setting:
SlotA is controlled by CPU1 directly.
SlotB is controlled by CPU0 directly.
DPDK pre-setting:
a. BIOS setting:
HT=disable
b. Kernel parameters
isolcpus=2,3,6,7
default_hugepagesz=1024M
hugepagesz=1024M
hugepages=16
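A quick sanity check on the hugepage parameters above: 16 pages of 1024 MB reserve 16 GB in total, and by default the kernel spreads boot-time hugepages evenly across NUMA nodes, i.e. 8 GB per socket on this two-socket box (an assumption; the actual split is visible under /sys/devices/system/node/node*/hugepages at runtime). The arithmetic as a sketch:

```shell
# Reserved hugepage memory implied by the boot parameters above.
pages=16        # hugepages=16
page_mb=1024    # hugepagesz=1024M
total_mb=$(( pages * page_mb ))
per_node_mb=$(( total_mb / 2 ))   # two sockets, assuming an even split
echo "total: ${total_mb} MB, per NUMA node: ${per_node_mb} MB"
```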
c. OS setting:
service avahi-daemon stop
service NetworkManager stop
service iptables stop
service acpid stop
selinux disable
Example program Command:
a. SlotB(CPU0) -> CPU1
#>./l2fwd -c 0xc -n 4 -- -q 1 -p 0xc
b. SlotA(CPU1) -> CPU0
#>./l2fwd -c 0xc0 -n 4 -- -q 1 -p 0xc0
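For reference, the -c coremasks above select cores 2-3 and cores 6-7, matching isolcpus=2,3,6,7; assuming the usual enumeration on a dual E5-2643 with HT off (cores 0-3 on socket 0, 4-7 on socket 1), 0xc pins to socket 0 and 0xc0 to socket 1. A minimal sketch to decode any mask (mask_to_cores is just an illustrative helper, not a DPDK tool):

```shell
# Expand a DPDK -c hex coremask into the list of selected core IDs.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    # Low bit set -> this core is selected
    [ $(( mask & 1 )) -eq 1 ] && out="$out $core"
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out# }"
}

mask_to_cores 0xc    # → 2 3
mask_to_cores 0xc0   # → 6 7
```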
Results:
Frame size: 128 bytes

CPU Affinity    Slot A (CPU1)   Slot B (CPU0)
CPU0            15.9%           96.49%
CPU1            90.88%          24.78%
This email may contain confidential information. Please do not use or disclose it in any way and delete it if you are not the intended recipient.
Thread overview: 3+ messages
2013-09-02 3:22 [dpdk-dev] DPDK & QPI performance issue in Romley platform Zachary
2013-09-02 16:10 ` Stephen Hemminger
2013-09-03 16:19 ` Bob Chen [this message]