DPDK patches and discussions
* [dpdk-dev] Maximum possible throughput with the KNI DPDK Application
@ 2014-09-17 22:50 Malveeka Tewari
  2014-09-18  1:01 ` Zhang, Helin
  0 siblings, 1 reply; 5+ messages in thread
From: Malveeka Tewari @ 2014-09-17 22:50 UTC (permalink / raw)
  To: dev

Hi all

I've been playing with the DPDK API to send out packets using the l2fwd app
and the Kernel Network Interface (KNI) app with a single Intel 82599 NIC on an
Intel Xeon E5-2630.

With the l2fwd application, I've been able to achieve 14.88 Mpps with
minimum-sized packets.
However, running iperf with the KNI application gives me only ~7 Gb/s
peak throughput.
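(For reference, 14.88 Mpps is the theoretical 10GbE line rate for 64-byte
frames: 10 Gb/s / ((64 + 20) bytes * 8 bits/byte) ~= 14.88 Mpps, where the
extra 20 bytes per frame are the preamble, SFD and inter-frame gap.)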

Has anyone achieved the 10Gb/s line rate with the KNI application?
Any help would be greatly appreciated!

Thanks!
Malveeka

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application
  2014-09-17 22:50 [dpdk-dev] Maximum possible throughput with the KNI DPDK Application Malveeka Tewari
@ 2014-09-18  1:01 ` Zhang, Helin
  2014-09-18  4:55   ` Malveeka Tewari
  0 siblings, 1 reply; 5+ messages in thread
From: Zhang, Helin @ 2014-09-18  1:01 UTC (permalink / raw)
  To: Malveeka Tewari, dev

Hi Malveeka

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Malveeka Tewari
> Sent: Thursday, September 18, 2014 6:51 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Maximum possible throughput with the KNI DPDK
> Application
> 
> Hi all
> 
> I've been playing with the DPDK API to send out packets using the l2fwd app
> and the Kernel Network Interface (KNI) app with a single Intel 82599 NIC on an
> Intel Xeon E5-2630.
> 
> With the l2fwd application, I've been able to achieve 14.88 Mpps with
> minimum-sized packets.
> However, running iperf with the KNI application gives me only ~7 Gb/s
> peak throughput.

KNI is quite different from other DPDK applications; it is not meant for fast-path forwarding. It passes the packets received in user space up to kernel space, and possibly through the kernel stack, so don't expect much higher performance. I think 7 Gb/s might already be a good result; what is your real use case for KNI?
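Roughly, one iteration of the user-space loop in the KNI example app looks like the sketch below (simplified and written from memory, not the exact example code; the actual user/kernel data copy between mbufs and skbs happens inside the rte_kni kernel module):

    #include <rte_ethdev.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>

    #define PKT_BURST 32

    /* Shuttle one burst of packets between a NIC port and the kernel
     * stack through a KNI device. */
    static void
    kni_loop_once(uint8_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST];
        unsigned nb_rx, nb_tx;

        /* NIC -> kernel stack */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST);
        nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);
        while (nb_tx < nb_rx)               /* free what the FIFO refused */
            rte_pktmbuf_free(pkts[nb_tx++]);

        /* kernel stack -> NIC */
        nb_rx = rte_kni_rx_burst(kni, pkts, PKT_BURST);
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
        while (nb_tx < nb_rx)               /* free what the NIC queue refused */
            rte_pktmbuf_free(pkts[nb_tx++]);

        /* serve control requests (ifconfig up/down, MTU change) from the kmod */
        rte_kni_handle_request(kni);
    }

That extra hop through the FIFOs, plus the copy in the kernel module, is why KNI cannot match pure PMD forwarding.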

> 
> Has anyone achieved the 10Gb/s line rate with the KNI application?
> Any help would be greatly appreciated!
> 
> Thanks!
> Malveeka

Regards,
Helin

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application
  2014-09-18  1:01 ` Zhang, Helin
@ 2014-09-18  4:55   ` Malveeka Tewari
       [not found]     ` <F35DEAC7BCE34641BA9FAC6BCA4A12E70A793CEF@SHSMSX104.ccr.corp.intel.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Malveeka Tewari @ 2014-09-18  4:55 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

Thanks Helin!

I am actually working on a project to quantify the overhead of user-space
to kernel-space data copying in conventional socket-based applications.
My understanding is that the KNI application involves a user-space ->
kernel-space -> user-space data copy before the packets are handed to the
igb_uio driver.
I wanted to find out whether the 7 Gb/s throughput is the maximum achievable
by the KNI application, or if someone has been able to achieve higher rates
by using more cores or some other configuration.

Regards,
Malveeka




On Wed, Sep 17, 2014 at 6:01 PM, Zhang, Helin <helin.zhang@intel.com> wrote:

> Hi Malveeka
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Malveeka Tewari
> > Sent: Thursday, September 18, 2014 6:51 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Maximum possible throughput with the KNI DPDK
> > Application
> >
> > Hi all
> >
> > I've been playing with the DPDK API to send out packets using the l2fwd
> > app and the Kernel Network Interface (KNI) app with a single Intel 82599
> > NIC on an Intel Xeon E5-2630.
> >
> > With the l2fwd application, I've been able to achieve 14.88 Mpps with
> > minimum-sized packets.
> > However, running iperf with the KNI application gives me only ~7 Gb/s
> > peak throughput.
>
> KNI is quite different from other DPDK applications; it is not meant for
> fast-path forwarding. It passes the packets received in user space up to
> kernel space, and possibly through the kernel stack, so don't expect much
> higher performance. I think 7 Gb/s might already be a good result; what is
> your real use case for KNI?
>
> >
> > Has anyone achieved the 10Gb/s line rate with the KNI application?
> > Any help would be greatly appreciated!
> >
> > Thanks!
> > Malveeka
>
> Regards,
> Helin
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application
       [not found]       ` <CAFpzwwOQFPuZ1h4pZeNGp=aEWFGayp14VYoa98qLM51HtQEeYA@mail.gmail.com>
@ 2014-09-18 23:15         ` Malveeka Tewari
  2014-09-19  2:59           ` Zhang, Helin
  0 siblings, 1 reply; 5+ messages in thread
From: Malveeka Tewari @ 2014-09-18 23:15 UTC (permalink / raw)
  To: Zhang, Helin, dev

[+dev@dpdk.org]

Sure, I understand that.
The 7 Gb/s iperf performance that I was getting was with one end host
using the KNI app and the other host running the traditional Linux stack.
With both end hosts running the KNI app, I see about 2.75 Gb/s, which is
understandable because TSO/LRO and the other hardware NIC offload features
are turned off.

I have another related question.
Is it possible to use multiple traffic queues with the KNI app?
I tried creating different queues using tc for the vEth0_0 device, but that
gave me an error.

>$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
>$ RTNETLINK answers: Operation not supported

If I wanted to add support for multiple tc queues with the KNI app, where
should I start making my changes?
I looked at the "lib/librte_kni/rte_kni_fifo.h" but it wasn't clear how I
can add support for different queues for the KNI app.
Any pointers would be extremely helpful.
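For reference, my rough reading of the shared ring defined there (field names
from memory, so they may not exactly match the header) is:

    /* Single-producer/single-consumer ring of mbuf pointers, shared
     * between the user-space KNI library and the kernel module; each
     * KNI device has its own tx/rx/alloc/free FIFOs of this shape. */
    struct rte_kni_fifo {
        volatile unsigned write;    /* next slot the producer will fill  */
        volatile unsigned read;     /* next slot the consumer will drain */
        unsigned len;               /* number of slots in the ring       */
        unsigned elem_size;         /* size of one element (a pointer)   */
        void *volatile buffer[];    /* the ring itself                   */
    };

so I am guessing the tc failure comes from the kernel side, where the kni
module registers its net_device with a single TX queue, rather than from
these FIFOs.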

Thanks!

On Thu, Sep 18, 2014 at 3:28 PM, Malveeka Tewari <malveeka@gmail.com> wrote:

> Sure, I understand that.
> The 7 Gb/s iperf performance that I was getting was with one end host
> using the DPDK framework and the other host running the traditional Linux
> stack.
> With both end hosts using DPDK, I see about 2.75 Gb/s, which is
> understandable because TSO/LRO and the other hardware NIC offload features
> are turned off.
>
> I have another KNI related question.
> Is it possible to use multiple traffic queues with the KNI app?
> I tried creating different queues using tc for the vEth0_0 device, but that
> gave me an error.
>
> >$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
> >$ RTNETLINK answers: Operation not supported
>
> If I wanted to add support for multiple tc queues with the KNI app, where
> should I start making my changes?
> I looked at the "lib/librte_kni/rte_kni_fifo.h" but it wasn't clear how I
> can add support for different queues for the KNI app.
> Any pointers would be extremely helpful.
>
> Thanks!
> Malveeka
>
> On Wed, Sep 17, 2014 at 10:47 PM, Zhang, Helin <helin.zhang@intel.com>
> wrote:
>
>>  Hi Malveeka
>>
>> The KNI loopback function can provide good enough performance, and more
>> queues/threads can provide better performance. For normal (non-loopback)
>> KNI, the traffic has to go through the kernel stack, bridge, etc., so the
>> performance bottleneck is no longer in the DPDK part. You can try more
>> queues/threads to see whether performance improves, but do not expect too
>> much!
>>
>> Regards,
>>
>> Helin
>>
>> From: Malveeka Tewari [mailto:malveeka@gmail.com]
>> Sent: Thursday, September 18, 2014 12:56 PM
>> To: Zhang, Helin
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK
>> Application
>>
>> Thanks Helin!
>>
>> I am actually working on a project to quantify the overhead of user-space
>> to kernel-space data copying in case of conventional socket based
>> applications.
>>
>> My understanding is that the KNI application involves userspace -> kernel
>> space -> user-space data copy again to send to the igb_uio driver.
>>
>> I wanted to find out if the 7Gb/s throughput is the maximum throughput
>> achievable by the KNI application or if someone has been able to achieve
>> higher rates by using more cores or some other configuration.
>>
>> Regards,
>>
>> Malveeka
>>
>> On Wed, Sep 17, 2014 at 6:01 PM, Zhang, Helin <helin.zhang@intel.com>
>> wrote:
>>
>> Hi Malveeka
>>
>> > -----Original Message-----
>> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Malveeka Tewari
>> > Sent: Thursday, September 18, 2014 6:51 AM
>> > To: dev@dpdk.org
>> > Subject: [dpdk-dev] Maximum possible throughput with the KNI DPDK
>> > Application
>> >
>> > Hi all
>> >
>> > I've been playing with the DPDK API to send out packets using the l2fwd
>> > app and the Kernel Network Interface (KNI) app with a single Intel 82599
>> > NIC on an Intel Xeon E5-2630.
>> >
>> > With the l2fwd application, I've been able to achieve 14.88 Mpps with
>> > minimum-sized packets.
>> > However, running iperf with the KNI application gives me only ~7 Gb/s
>> > peak throughput.
>>
>> KNI is quite different from other DPDK applications; it is not meant for
>> fast-path forwarding. It passes the packets received in user space up to
>> kernel space, and possibly through the kernel stack, so don't expect much
>> higher performance. I think 7 Gb/s might already be a good result; what is
>> your real use case for KNI?
>>
>> >
>> > Has anyone achieved the 10Gb/s line rate with the KNI application?
>> > Any help would be greatly appreciated!
>> >
>> > Thanks!
>> > Malveeka
>>
>> Regards,
>> Helin
>>

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application
  2014-09-18 23:15         ` Malveeka Tewari
@ 2014-09-19  2:59           ` Zhang, Helin
  0 siblings, 0 replies; 5+ messages in thread
From: Zhang, Helin @ 2014-09-19  2:59 UTC (permalink / raw)
  To: Malveeka Tewari, dev

Hi

Sure, multiple queues can be used in any KNI app; the current KNI example app is essentially "l2fwd app + KNI support", so whatever can be done in l2fwd can be done in the KNI app. But you may need to check whether multiple queues really work in the current example KNI app, as I have not tested it that way.
Actually, the KNI library just provides a way to exchange packets between kernel space and user space, regardless of how the packets are received and transmitted in user space.
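As a rough illustration (hypothetical code, not taken from the shipped example): in user space you can poll several NIC RX queues and funnel them all into one KNI device, or create one KNI device per queue. Just keep in mind that the FIFOs behind a single KNI device are single-producer, single-consumer, so all rte_kni_tx_burst() calls for one device should stay on one lcore.

    #include <rte_ethdev.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>

    #define PKT_BURST 32

    /* Hypothetical helper: drain several NIC RX queues of one port into a
     * single KNI device (all calls made from the same lcore). */
    static void
    nic_queues_to_kni(uint8_t port_id, uint16_t nb_rx_queues, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[PKT_BURST];
        uint16_t q;

        for (q = 0; q < nb_rx_queues; q++) {
            uint16_t nb_rx = rte_eth_rx_burst(port_id, q, pkts, PKT_BURST);
            unsigned nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);

            while (nb_tx < nb_rx)           /* free what the FIFO refused */
                rte_pktmbuf_free(pkts[nb_tx++]);
        }
    }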

Regards,
Helin

From: Malveeka Tewari [mailto:malveeka@gmail.com]
Sent: Friday, September 19, 2014 7:15 AM
To: Zhang, Helin; dev@dpdk.org
Subject: Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application

[+dev@dpdk.org]

Sure, I understand that.
The 7 Gb/s iperf performance that I was getting was with one end host using the KNI app and the other host running the traditional Linux stack.
With both end hosts running the KNI app, I see about 2.75 Gb/s, which is understandable because TSO/LRO and the other hardware NIC offload features are turned off.

I have another related question.
Is it possible to use multiple traffic queues with the KNI app?
I tried creating different queues using tc for the vEth0_0 device, but that gave me an error.

>$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
>$ RTNETLINK answers: Operation not supported

If I wanted to add support for multiple tc queues with the KNI app, where should I start making my changes?
I looked at the "lib/librte_kni/rte_kni_fifo.h" but it wasn't clear how I can add support for different queues for the KNI app.
Any pointers would be extremely helpful.

Thanks!

On Thu, Sep 18, 2014 at 3:28 PM, Malveeka Tewari <malveeka@gmail.com> wrote:
Sure, I understand that.
The 7 Gb/s iperf performance that I was getting was with one end host using the DPDK framework and the other host running the traditional Linux stack.
With both end hosts using DPDK, I see about 2.75 Gb/s, which is understandable because TSO/LRO and the other hardware NIC offload features are turned off.

I have another KNI related question.
Is it possible to use multiple traffic queues with the KNI app?
I tried creating different queues using tc for the vEth0_0 device, but that gave me an error.

>$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
>$ RTNETLINK answers: Operation not supported

If I wanted to add support for multiple tc queues with the KNI app, where should I start making my changes?
I looked at the "lib/librte_kni/rte_kni_fifo.h" but it wasn't clear how I can add support for different queues for the KNI app.
Any pointers would be extremely helpful.

Thanks!
Malveeka

On Wed, Sep 17, 2014 at 10:47 PM, Zhang, Helin <helin.zhang@intel.com> wrote:
Hi Malveeka

The KNI loopback function can provide good enough performance, and more queues/threads can provide better performance. For normal (non-loopback) KNI, the traffic has to go through the kernel stack, bridge, etc., so the performance bottleneck is no longer in the DPDK part. You can try more queues/threads to see whether performance improves, but do not expect too much!

Regards,
Helin

From: Malveeka Tewari [mailto:malveeka@gmail.com]
Sent: Thursday, September 18, 2014 12:56 PM
To: Zhang, Helin
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application

Thanks Helin!

I am actually working on a project to quantify the overhead of user-space to kernel-space data copying in case of conventional socket based applications.
My understanding is that the KNI application involves userspace -> kernel space -> user-space data copy again to send to the igb_uio driver.
I wanted to find out if the 7Gb/s throughput is the maximum throughput achievable by the KNI application or if someone has been able to achieve higher rates by using more cores or some other configuration.

Regards,
Malveeka




On Wed, Sep 17, 2014 at 6:01 PM, Zhang, Helin <helin.zhang@intel.com> wrote:
Hi Malveeka

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Malveeka Tewari
> Sent: Thursday, September 18, 2014 6:51 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Maximum possible throughput with the KNI DPDK
> Application
>
> Hi all
>
> I've been playing with the DPDK API to send out packets using the l2fwd app
> and the Kernel Network Interface (KNI) app with a single Intel 82599 NIC on
> an Intel Xeon E5-2630.
> 
> With the l2fwd application, I've been able to achieve 14.88 Mpps with
> minimum-sized packets.
> However, running iperf with the KNI application gives me only ~7 Gb/s
> peak throughput.

KNI is quite different from other DPDK applications; it is not meant for fast-path forwarding. It passes the packets received in user space up to kernel space, and possibly through the kernel stack, so don't expect much higher performance. I think 7 Gb/s might already be a good result; what is your real use case for KNI?

>
> Has anyone achieved the 10Gb/s line rate with the KNI application?
> Any help would be greatly appreciated!
>
> Thanks!
> Malveeka

Regards,
Helin




^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2014-09-19  2:57 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-17 22:50 [dpdk-dev] Maximum possible throughput with the KNI DPDK Application Malveeka Tewari
2014-09-18  1:01 ` Zhang, Helin
2014-09-18  4:55   ` Malveeka Tewari
     [not found]     ` <F35DEAC7BCE34641BA9FAC6BCA4A12E70A793CEF@SHSMSX104.ccr.corp.intel.com>
     [not found]       ` <CAFpzwwOQFPuZ1h4pZeNGp=aEWFGayp14VYoa98qLM51HtQEeYA@mail.gmail.com>
2014-09-18 23:15         ` Malveeka Tewari
2014-09-19  2:59           ` Zhang, Helin
