DPDK usage discussions
From: Royce Niu <royceniu@gmail.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Performance Problem of DPDK pkt-gen
Date: Tue, 8 Mar 2016 00:01:30 +0800
Message-ID: <CAOwUCNsoYH4OeZW9K9iXWZWPq90Ht2T+OoON3-oiJb9_0rJ2Bw@mail.gmail.com>
In-Reply-To: <EFD91A87-5006-40C6-9BE1-3DCE2B6C3B84@intel.com>

Dear Keith,

I am doing measurement work. The two PCs are identical in software and
physical configuration, with two 10 Gb/s links.

L2FWD actually runs in a virtual machine on the L2FWD PC. I don't mind
packet drops as long as the L2FWD VM forwards as fast as it can.

Is there a way to disable this rate limiting in Linux/DPDK? Then I could
measure how many packets are lost at 14.4 Mpps.
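
For reference, this is the kind of thing I have in mind (an untested
sketch; I am assuming the ixgbe PMD for the X520 supports the generic
flow control API). Since the ports are bound to igb_uio, ethtool -A is
not available, so it would have to be done from inside the DPDK
application:

    #include <string.h>
    #include <rte_ethdev.h>

    /* Untested sketch: force 802.3x flow control off on one port so
     * the peer's pause frames no longer throttle our TX rate. */
    static int disable_flow_ctrl(uint8_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        memset(&fc_conf, 0, sizeof(fc_conf));
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret < 0)
            return ret;

        fc_conf.mode = RTE_FC_NONE; /* neither send nor honour pause */
        fc_conf.autoneg = 0;
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }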



On Mon, Mar 7, 2016 at 11:49 PM, Wiles, Keith <keith.wiles@intel.com> wrote:

> From: Royce Niu <royceniu@gmail.com>
> Date: Monday, March 7, 2016 at 9:41 AM
> To: Keith Wiles <keith.wiles@intel.com>
> Cc: Royce Niu <royceniu@gmail.com>, "users@dpdk.org" <users@dpdk.org>
> Subject: Re: [dpdk-users] Performance Problem of DPDK pkt-gen
>
> Yes.
>
> The problem is that the sending rate is not 14 Mpps when L2FWD is running.
>
> When L2FWD is running, the sending rate is about 4 Mpps instead of 14 Mpps.
> When I shut down the L2FWD PC, the sending rate recovers to 14 Mpps...
>
> I think something may be wrong with my Pktgen PC. Could you help me
> check my commands?
>
>
> I do not think Pktgen, or the PC it runs on, has a problem. I expect
> the second PC cannot keep up with the RX rate and is sending pause
> frames to the TX machine; those pause frames reduce the TX rate on the
> Pktgen PC.
>
> Try running Pktgen on the L2FWD PC and see if the rate drops there. If
> it does not, then the second PC is sending pause frames back to the
> first PC, rate-limiting its TX side.
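>
> To confirm pause frames are the cause, you can also dump the extended
> NIC stats and look at the xon/xoff counters. A rough sketch (counter
> names are PMD specific, so treat the string match as a guess):
>
>     #include <stdio.h>
>     #include <string.h>
>     #include <inttypes.h>
>     #include <rte_ethdev.h>
>
>     /* Sketch: print any extended stats that look like pause-frame
>      * counters (xon/xoff). */
>     static void show_pause_counters(uint8_t port_id)
>     {
>         struct rte_eth_xstats xs[256];
>         int i, n = rte_eth_xstats_get(port_id, xs, 256);
>
>         for (i = 0; i < n; i++)
>             if (strstr(xs[i].name, "xon") || strstr(xs[i].name, "xoff"))
>                 printf("%s: %" PRIu64 "\n", xs[i].name, xs[i].value);
>     }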
>
>
>
>
> On Mon, Mar 7, 2016 at 11:35 PM, Wiles, Keith <keith.wiles@intel.com>
> wrote:
>
>> From: Royce Niu <royceniu@gmail.com>
>> Date: Monday, March 7, 2016 at 9:30 AM
>> To: Keith Wiles <keith.wiles@intel.com>
>> Cc: Royce Niu <royceniu@gmail.com>, "users@dpdk.org" <users@dpdk.org>
>> Subject: Re: [dpdk-users] Performance Problem of DPDK pkt-gen
>>
>> Hi, Keith
>>
>> Maybe, since I have not deliberately configured CPU affinity on the
>> L2FWD PC so far.
>>
>> But my question is why the first PC has a poor sending rate when
>> L2FWD is running on the second PC.
>>
>> You mean the problem is related to L2FWD?
>>
>>
>> The sending rate of the Pktgen PC should be constant, but the
>> forwarding rate of the second PC may be the problem: packets received
>> on one socket and then sent out on another must cross the QPI bus
>> between sockets, which is not as fast as staying on one socket.
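>>
>> If both NICs sit on the same socket, keeping the forwarding cores and
>> the mbuf pool on that socket avoids the QPI hop entirely. Roughly (a
>> sketch, not l2fwd's actual code):
>>
>>     #include <rte_ethdev.h>
>>     #include <rte_mbuf.h>
>>
>>     /* Sketch: create the mbuf pool on the NIC's own socket so RX,
>>      * TX and packet buffers all stay local to one socket. */
>>     static struct rte_mempool *pool_for_port(uint8_t port_id)
>>     {
>>         int sock = rte_eth_dev_socket_id(port_id);
>>
>>         return rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
>>                                        RTE_MBUF_DEFAULT_BUF_SIZE, sock);
>>     }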
>>
>>
>>
>> On Mon, Mar 7, 2016 at 11:26 PM, Wiles, Keith <keith.wiles@intel.com>
>> wrote:
>>
>>> >Dear all,
>>> >
>>> >I am using a server with 4 CPUs (4 x 8-core CPUs with HT) and X520
>>> >NICs.
>>> >
>>> >When I run pktgen on NIC1 or NIC2 alone, the 64-byte generation
>>> >rate is 14 Mpps.
>>> >
>>> >If I generate on both NIC1 and NIC2, the 64-byte rate on each is
>>> >more than 13 Mpps.
>>> >
>>> >However, when I use an identically configured PC (running DPDK
>>> >L2FWD) to bridge NIC1 and NIC2, so that I can generate packets on
>>> >NIC1 and receive them on NIC2 in pktgen, the generation rate drops
>>> >to 4 Mpps and the receive rate to 3 Mpps.
>>>
>>> I am not sure how you configured the second PC for L2FWD, but I
>>> suspect L2FWD has to receive packets on socket 0 and send them on
>>> socket 1, which means the QPI bus gets involved. Is this the case?
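>>>
>>> A quick way to check is to print each port's socket and compare it
>>> with the lcores you are using. Note from your EAL output below that
>>> lcore N sits on socket N%4, so the -m "[1:3].0, [2:4].1" mapping on
>>> the Pktgen box puts those four cores on four different sockets. A
>>> sketch:
>>>
>>>     #include <stdio.h>
>>>     #include <rte_ethdev.h>
>>>
>>>     /* Sketch: print which socket each port is attached to; the
>>>      * lcores handling a port should come from the same socket. */
>>>     static void show_port_sockets(void)
>>>     {
>>>         uint8_t p;
>>>
>>>         for (p = 0; p < rte_eth_dev_count(); p++)
>>>             printf("port %u is on socket %d\n",
>>>                    p, rte_eth_dev_socket_id(p));
>>>     }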
>>>
>>> >
>>> >
>>> >I want to know why the generation rate is slower than without the
>>> >NIC1-NIC2 bridge, and how to solve this problem.
>>> >
>>> >The detailed information is as follows.
>>> >
>>> >sudo sysctl vm.nr_hugepages=4096
>>> >echo 1024 | sudo tee
>>> >/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>>> >echo 1024 | sudo tee
>>> >/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>>> >echo 1024 | sudo tee
>>> >/sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>>> >echo 1024 | sudo tee
>>> >/sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
>>> >
>>> >
>>> >sudo mkdir -p /dev/hugepages
>>> >sudo mount -t hugetlbfs nodev /dev/hugepages
>>> >
>>> >
>>> >sudo dpdk-2.2.0/tools/dpdk_nic_bind.py --status
>>> >sudo modprobe uio
>>> >sudo insmod dpdk-2.2.0/build/kmod/igb_uio.ko
>>> >
>>> >sudo dpdk-2.2.0/tools/dpdk_nic_bind.py -b igb_uio 04:00.0 04:00.1
>>> >sudo dpdk-2.2.0/tools/dpdk_nic_bind.py --status
>>> >
>>> >cd pktgen-2.9.12/
>>> >
>>> >sudo app/build/pktgen -c 0x1f -n 3 --proc-type auto --socket-mem
>>> >128,128,128,128 -- -P -m "[1:3].0, [2:4].1" -f test/set_seq.pkt
>>> >
>>> >I tried changing -m, but sometimes pktgen generates no packets at
>>> >all.
>>> >
>>> >
>>> >The core map is:
>>> >
>>> >EAL: Detected lcore 0 as core 0 on socket 0
>>> >EAL: Detected lcore 1 as core 0 on socket 1
>>> >EAL: Detected lcore 2 as core 0 on socket 2
>>> >EAL: Detected lcore 3 as core 0 on socket 3
>>> >EAL: Detected lcore 4 as core 1 on socket 0
>>> >EAL: Detected lcore 5 as core 1 on socket 1
>>> >EAL: Detected lcore 6 as core 1 on socket 2
>>> >EAL: Detected lcore 7 as core 1 on socket 3
>>> >EAL: Detected lcore 8 as core 2 on socket 0
>>> >EAL: Detected lcore 9 as core 2 on socket 1
>>> >EAL: Detected lcore 10 as core 2 on socket 2
>>> >EAL: Detected lcore 11 as core 2 on socket 3
>>> >EAL: Detected lcore 12 as core 3 on socket 0
>>> >EAL: Detected lcore 13 as core 3 on socket 1
>>> >EAL: Detected lcore 14 as core 3 on socket 2
>>> >EAL: Detected lcore 15 as core 3 on socket 3
>>> >EAL: Detected lcore 16 as core 4 on socket 0
>>> >EAL: Detected lcore 17 as core 4 on socket 1
>>> >EAL: Detected lcore 18 as core 4 on socket 2
>>> >EAL: Detected lcore 19 as core 4 on socket 3
>>> >EAL: Detected lcore 20 as core 5 on socket 0
>>> >EAL: Detected lcore 21 as core 5 on socket 1
>>> >EAL: Detected lcore 22 as core 5 on socket 2
>>> >EAL: Detected lcore 23 as core 5 on socket 3
>>> >EAL: Detected lcore 24 as core 6 on socket 0
>>> >EAL: Detected lcore 25 as core 6 on socket 1
>>> >EAL: Detected lcore 26 as core 6 on socket 2
>>> >EAL: Detected lcore 27 as core 6 on socket 3
>>> >EAL: Detected lcore 28 as core 7 on socket 0
>>> >EAL: Detected lcore 29 as core 7 on socket 1
>>> >EAL: Detected lcore 30 as core 7 on socket 2
>>> >EAL: Detected lcore 31 as core 7 on socket 3
>>> >EAL: Detected lcore 32 as core 0 on socket 0
>>> >EAL: Detected lcore 33 as core 0 on socket 1
>>> >EAL: Detected lcore 34 as core 0 on socket 2
>>> >EAL: Detected lcore 35 as core 0 on socket 3
>>> >EAL: Detected lcore 36 as core 1 on socket 0
>>> >EAL: Detected lcore 37 as core 1 on socket 1
>>> >EAL: Detected lcore 38 as core 1 on socket 2
>>> >EAL: Detected lcore 39 as core 1 on socket 3
>>> >EAL: Detected lcore 40 as core 2 on socket 0
>>> >EAL: Detected lcore 41 as core 2 on socket 1
>>> >EAL: Detected lcore 42 as core 2 on socket 2
>>> >EAL: Detected lcore 43 as core 2 on socket 3
>>> >EAL: Detected lcore 44 as core 3 on socket 0
>>> >EAL: Detected lcore 45 as core 3 on socket 1
>>> >EAL: Detected lcore 46 as core 3 on socket 2
>>> >EAL: Detected lcore 47 as core 3 on socket 3
>>> >EAL: Detected lcore 48 as core 4 on socket 0
>>> >EAL: Detected lcore 49 as core 4 on socket 1
>>> >EAL: Detected lcore 50 as core 4 on socket 2
>>> >EAL: Detected lcore 51 as core 4 on socket 3
>>> >EAL: Detected lcore 52 as core 5 on socket 0
>>> >EAL: Detected lcore 53 as core 5 on socket 1
>>> >EAL: Detected lcore 54 as core 5 on socket 2
>>> >EAL: Detected lcore 55 as core 5 on socket 3
>>> >EAL: Detected lcore 56 as core 6 on socket 0
>>> >EAL: Detected lcore 57 as core 6 on socket 1
>>> >EAL: Detected lcore 58 as core 6 on socket 2
>>> >EAL: Detected lcore 59 as core 6 on socket 3
>>> >EAL: Detected lcore 60 as core 7 on socket 0
>>> >EAL: Detected lcore 61 as core 7 on socket 1
>>> >EAL: Detected lcore 62 as core 7 on socket 2
>>> >EAL: Detected lcore 63 as core 7 on socket 3
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >--
>>> >Regards,
>>> >
>>> >Royce Niu
>>> >
>>>
>>>
>>> Regards,
>>> Keith
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>>
>> Royce Niu
>>
>>
>>
>> Regards,
>> Keith
>>
>>
>
>
> --
> Regards,
>
> Royce Niu
>
>
>
> Regards,
> Keith
>
>


-- 
Regards,

Royce Niu
