DPDK patches and discussions
From: Andriy Berestovskyy <aber@semihalf.com>
To: 张伟 <zhangwqh@126.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Matthew Hall <mhall@mhcomputing.net>,
	nikita@gandi.net
Subject: Re: [dpdk-dev] lpm performance
Date: Tue, 20 Sep 2016 16:41:36 +0200	[thread overview]
Message-ID: <CAOysbxpopbPYyFuHmn9Oaz3vNsJQCQ1mwU3CZndO=LEA6n=EiQ@mail.gmail.com> (raw)
In-Reply-To: <356cd0e7.bdfb.15747357ddf.Coremail.zhangwqh@126.com>

AFAIR Intel hardware should do 10 Gbit/s line rate (i.e. ~14.8 Mpps at 64
bytes) with one flow and LPM quite easily. Sorry, I don't have numbers at
hand to share.

Regarding traffic generation tools, please see pktgen-dpdk or TRex. Regarding
the number of flows and overall benchmarking methodology, please see RFC 2544.

Andriy


On Tue, Sep 20, 2016 at 12:47 PM, 张伟 <zhangwqh@126.com> wrote:
> Thanks so much for your reply! How do you usually test LPM performance with
> a variety of destination addresses? Which tool do you use to send the
> traffic? How many flow rules do you add? What performance do you get?
>
> At 2016-09-20 17:41:13, "Andriy Berestovskyy" <aber@semihalf.com> wrote:
>>Hey,
>>You are correct. The LPM might need just one (TBL24) or two memory
>>reads (TBL24 + TBL8). The performance also drops once you have a
>>variety of destination addresses instead of just one (cache misses).
>>
>>In your case for the dst IP 192.168.1.2 you will have two memory reads
>>(TBL24 + TBL8), because 192.168.1/24 block has the more specific route
>>192.168.1.1/32.
>>
>>Regards,
>>Andriy
>>
>>On Tue, Sep 20, 2016 at 12:18 AM, 张伟 <zhangwqh@126.com> wrote:
>>> Hi all,
>>>
>>>
>>> Has anyone tested IPv4 LPM performance? If so, what throughput do you
>>> get? I can get almost 10 Gbit/s with 64-byte packets, but before the test
>>> I expected it to be less than 10G. I thought the performance would not be
>>> affected by the number of rule entries, but that the throughput would
>>> depend on whether the flow needs to check the second-level table, TBL8.
>>> Is my understanding correct? I added the flow entries following slide 10
>>> of http://www.slideshare.net/garyachy/understanding-ddpd-algorithmics:
>>>
>>> struct ipv4_lpm_route ipv4_lpm_route_array[] = {
>>>         {IPv4(192, 168, 0, 0), 16, 0},
>>>         {IPv4(192, 168, 1, 0), 24, 1},
>>>         {IPv4(192, 168, 1, 1), 32, 2}
>>> };
>>>
>>> and send the flow with dst IP 192.168.1.2.
>>>
>>> It should hit the second-level table, but the performance is still 10G.
>>> Is something wrong with my setup, or can it really achieve 10G with
>>> 64-byte packets?
>>>
>>> Thanks,
>>>
>>>
>>
>>
>>
>>--
>>Andriy Berestovskyy



-- 
Andriy Berestovskyy

Thread overview: 5+ messages
2016-09-19 22:18 张伟
2016-09-20  9:41 ` Andriy Berestovskyy
2016-09-20 10:47   ` 张伟
2016-09-20 14:41     ` Andriy Berestovskyy [this message]
2016-09-21  2:42       ` 张伟
