From: "Wiles, Roger Keith" <keith.wiles@windriver.com>
To: jinho hwang <hwang.jinho@gmail.com>
Cc: dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] ways to generate 40Gbps with two NICs x two ports?
Date: Tue, 19 Nov 2013 16:52:51 +0000 [thread overview]
Message-ID: <655E021C-F35A-44B0-9902-D390FD6297B5@windriver.com> (raw)
In-Reply-To: <CAPQGAnFcvcsCQVq2R_mtf95r8n3dKEb-J1W03LzJRx2-s7XRQQ@mail.gmail.com>
Sorry, I mistyped the speed of my machine: it is 2.4 GHz, not 3.4 GHz, but that should not change the problem here.
I am not sure how to determine whether your machine has a problem, other than starting up one port at a time and seeing if the rate drops when you bring up the fourth port.
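For example, bringing the ports up one at a time in the Pktgen CLI (command names from memory; check `help` in your version for the exact port-list syntax):

```
Pktgen> start 0      # port 0 only -- note the Tx rate
Pktgen> start 1      # ports 0-1
Pktgen> start 2      # ports 0-2
Pktgen> start 3      # all four -- watch whether the per-port rate drops
Pktgen> stop all
```

If the first three ports hold wire rate and only the fourth falls short, that points at a shared PCIe/host limit rather than a per-port configuration problem.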
Keith Wiles, Principal Technologist for Networking member of the CTO office, Wind River
mobile 940.213.5533
On Nov 19, 2013, at 10:42 AM, jinho hwang <hwang.jinho@gmail.com> wrote:
On Tue, Nov 19, 2013 at 11:31 AM, Wiles, Roger Keith
<keith.wiles@windriver.com> wrote:
How do you have Pktgen configured in this case?
On my Westmere dual-socket 3.4 GHz machine I can send 20G on a single NIC
(82599 x two ports). My machine has a PCIe bug that does not allow me to send
on more than 3 ports at wire rate. I get close to 40G with 64-byte packets, but
the fourth port runs at about 70% of wire rate because of the PCIe hardware
bottleneck problem.
On Nov 19, 2013, at 10:09 AM, jinho hwang <hwang.jinho@gmail.com> wrote:
Hi All,
I have two NICs (82599), each with two ports, that are used as packet generators.
I want to generate full line-rate traffic (40 Gbps), but Pktgen-DPDK does not
seem able to do it when both ports on a NIC are used simultaneously.
Does anyone know how to generate 40 Gbps without replicating packets in the
switch?
Thank you,
Jinho
Hi Keith,
Thank you for the e-mail. I am not sure how to figure out whether my
PCIe bus also has problems that prevent me from sending at full line rate.
I use an Intel(R) Xeon(R) CPU E5649 @ 2.53GHz. It is hard for
me to figure out where the bottleneck is.
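As a quick sanity check of what "full line rate" demands here, the required packet rate follows from the 64-byte frame plus the 20 bytes of per-frame wire overhead (8-byte preamble + 12-byte inter-frame gap):

```python
# Packets per second needed to saturate a link with a given frame size.
# On the wire each frame occupies frame_bytes + 8 B preamble + 12 B
# inter-frame gap, i.e. 84 bytes for a minimum-size 64-byte frame.
def line_rate_pps(link_bps, frame_bytes, wire_overhead=20):
    return link_bps / ((frame_bytes + wire_overhead) * 8)

per_port = line_rate_pps(10e9, 64)                  # one 10G port
print(f"{per_port / 1e6:.2f} Mpps per port")        # 14.88 Mpps
print(f"{4 * per_port / 1e6:.2f} Mpps for 40G")     # 59.52 Mpps
```

With one RX and one TX lcore per port, each TX core must sustain roughly 14.88 Mpps on its own, which is exactly the regime where per-core and PCIe limits start to show.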
My configuration is:
sudo ./app/build/pktgen -c 1ff -n 3 $BLACK-LIST -- -p 0xf0 -P -m
"[1:2].0, [3:4].1, [5:6].2, [7:8].3" -f test/forward.lua
=== port to lcore mapping table (# lcores 9) ===
lcore: 0 1 2 3 4 5 6 7 8
port 0: D: T 1: 0 0: 1 0: 0 0: 0 0: 0 0: 0 0: 0 0: 0 = 1: 1
port 1: D: T 0: 0 0: 0 1: 0 0: 1 0: 0 0: 0 0: 0 0: 0 = 1: 1
port 2: D: T 0: 0 0: 0 0: 0 0: 0 1: 0 0: 1 0: 0 0: 0 = 1: 1
port 3: D: T 0: 0 0: 0 0: 0 0: 0 0: 0 0: 0 1: 0 0: 1 = 1: 1
Total : 0: 0 1: 0 0: 1 1: 0 0: 1 1: 0 0: 1 1: 0 0: 1
Display and Timer on lcore 0, rx:tx counts per port/lcore
Configuring 4 ports, MBUF Size 1984, MBUF Cache Size 128
Lcore:
1, type RX , rx_cnt 1, tx_cnt 0 private (nil), RX (pid:qid): (
0: 0) , TX (pid:qid):
2, type TX , rx_cnt 0, tx_cnt 1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 0: 0)
3, type RX , rx_cnt 1, tx_cnt 0 private (nil), RX (pid:qid): (
1: 0) , TX (pid:qid):
4, type TX , rx_cnt 0, tx_cnt 1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 1: 0)
5, type RX , rx_cnt 1, tx_cnt 0 private (nil), RX (pid:qid): (
2: 0) , TX (pid:qid):
6, type TX , rx_cnt 0, tx_cnt 1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 2: 0)
7, type RX , rx_cnt 1, tx_cnt 0 private (nil), RX (pid:qid): (
3: 0) , TX (pid:qid):
8, type TX , rx_cnt 0, tx_cnt 1 private (nil), RX (pid:qid): ,
TX (pid:qid): ( 3: 0)
Port :
0, nb_lcores 2, private 0x6fd5a0, lcores: 1 2
1, nb_lcores 2, private 0x700208, lcores: 3 4
2, nb_lcores 2, private 0x702e70, lcores: 5 6
3, nb_lcores 2, private 0x705ad8, lcores: 7 8
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 90:e2:ba:2f:f2:a4
Create: Default RX 0:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Default TX 0:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Range TX 0:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Sequence TX 0:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Special TX 0:0 - Memory used (MBUFs 64 x (size 1984 +
Hdr 64)) + 395392 = 515 KB
Port memory used = 10251 KB
Initialize Port 1 -- TxQ 1, RxQ 1, Src MAC 90:e2:ba:2f:f2:a5
Create: Default RX 1:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Default TX 1:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Range TX 1:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Sequence TX 1:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Special TX 1:0 - Memory used (MBUFs 64 x (size 1984 +
Hdr 64)) + 395392 = 515 KB
Port memory used = 10251 KB
Initialize Port 2 -- TxQ 1, RxQ 1, Src MAC 90:e2:ba:4a:e6:1c
Create: Default RX 2:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Default TX 2:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Range TX 2:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Sequence TX 2:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Special TX 2:0 - Memory used (MBUFs 64 x (size 1984 +
Hdr 64)) + 395392 = 515 KB
Port memory used = 10251 KB
Initialize Port 3 -- TxQ 1, RxQ 1, Src MAC 90:e2:ba:4a:e6:1d
Create: Default RX 3:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Default TX 3:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Range TX 3:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Sequence TX 3:0 - Memory used (MBUFs 1024 x (size 1984 +
Hdr 64)) + 395392 = 2435 KB
Create: Special TX 3:0 - Memory used (MBUFs 64 x (size 1984 +
Hdr 64)) + 395392 = 515 KB
Port memory used = 10251 KB
Total memory used = 41003 KB
Port 0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
Port 1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
Port 2: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
Port 3: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
=== Display processing on lcore 0
=== RX processing on lcore 1, rxcnt 1, port/qid, 0/0
=== TX processing on lcore 2, txcnt 1, port/qid, 0/0
=== RX processing on lcore 3, rxcnt 1, port/qid, 1/0
=== TX processing on lcore 4, txcnt 1, port/qid, 1/0
=== RX processing on lcore 5, rxcnt 1, port/qid, 2/0
=== TX processing on lcore 6, txcnt 1, port/qid, 2/0
=== RX processing on lcore 7, rxcnt 1, port/qid, 3/0
=== TX processing on lcore 8, txcnt 1, port/qid, 3/0
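One way to check for the kind of PCIe limit Keith mentions is to compare each NIC's negotiated link (LnkSta) against its capability (LnkCap). The 04:00.0 / 06:00.0 addresses below are hypothetical; substitute the bus addresses your `lspci` reports for the 82599s:

```shell
# A dual-port 82599 wants a PCIe 2.0 x8 slot (5 GT/s x8); a link that
# negotiated x4 or 2.5 GT/s cannot carry 2x10G of small frames.
lspci | grep 82599                                   # find bus addresses
sudo lspci -s 04:00.0 -vv | grep -E 'LnkCap|LnkSta'
sudo lspci -s 06:00.0 -vv | grep -E 'LnkCap|LnkSta'
```

If LnkSta shows a lower speed or narrower width than LnkCap, the slot (or a riser) is the bottleneck rather than pktgen.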
Please advise me if you have time.
Thank you always for your help!
Jinho