From: Malveeka Tewari
To: "Zhang, Helin", dev@dpdk.org
Date: Thu, 18 Sep 2014 16:15:13 -0700
Subject: Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK Application

[+dev@dpdk.org]

Sure, I understand that.
The 7Gb/s performance with iperf that I was getting was with one end-host
using the KNI app and the other host running the traditional Linux stack.
With both end hosts running the KNI app, I see about 2.75Gb/s, which is
understandable because TSO/LRO and the other hardware NIC offloads are
turned off.

I have another related question.
Is it possible to use multiple traffic queues with the KNI app?
I tried creating different queues using tc for the vEth0_0 device, but that
gave me an error:

>$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
>$ RTNETLINK answers: Operation not supported

If I wanted to add support for multiple tc queues with the KNI app, where
should I start making my changes?
I looked at "lib/librte_kni/rte_kni_fifo.h", but it wasn't clear how I can
add support for different queues for the KNI app.
Any pointers would be extremely helpful.

Thanks!
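
A note on the tc error above: the multiq qdisc refuses to attach, returning
"Operation not supported", when the network device does not register more
than one TX queue, and the KNI vEth devices appear to be created
single-queue on the kernel side. The sketch below is not the actual KNI
code; it only illustrates, using standard kernel APIs (alloc_etherdev_mqs(),
netif_is_multiqueue(), free_netdev()), what a multi-queue net_device
registration looks like. KNI_TX_QUEUES and kni_alloc_multiq_netdev() are
invented names. The place to start would likely be the KNI kernel module in
the DPDK tree (the code that allocates the vEth net_device), rather than
lib/librte_kni/rte_kni_fifo.h, which only implements the user/kernel FIFO
helpers.

/*
 * Hypothetical sketch only: how a Linux driver would register a
 * multi-queue net_device so that "tc ... multiq" stops returning
 * -EOPNOTSUPP.  KNI_TX_QUEUES and kni_alloc_multiq_netdev() are
 * invented names; a real change would go where the KNI kernel module
 * allocates its vEth device.
 */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define KNI_TX_QUEUES 4  /* invented value, e.g. one per extra KNI FIFO pair */

static struct net_device *kni_alloc_multiq_netdev(void)
{
        struct net_device *dev;

        /* alloc_etherdev_mqs() sets up an Ethernet device with the given
         * number of TX and RX queues, unlike the single-queue
         * alloc_etherdev()/alloc_netdev(). */
        dev = alloc_etherdev_mqs(0 /* no private area */, KNI_TX_QUEUES, 1);
        if (!dev)
                return NULL;

        /* sch_multiq only attaches when this is true, i.e. when
         * dev->num_tx_queues > 1.  With a single queue it fails with
         * -EOPNOTSUPP, which tc prints as "Operation not supported". */
        if (!netif_is_multiqueue(dev)) {
                free_netdev(dev);
                return NULL;
        }

        /* A .ndo_select_queue callback in net_device_ops could steer
         * packets to specific queues; without one the stack's default
         * hashing picks a TX queue. */
        return dev;
}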

On Thu, Sep 18, 2014 at 3:28 PM, Malveeka Tewari wrote:

> Sure, I understand that.
> The 7Gb/s performance with iperf that I was getting was with one end-host
> using the DPDK framework and the other host running the traditional Linux
> stack.
> With both end hosts using DPDK, I see about 2.75Gb/s, which is
> understandable because TSO/LRO and the other hardware NIC offloads are
> turned off.
>
> I have another KNI-related question.
> Is it possible to use multiple traffic queues with the KNI app?
> I tried creating different queues using tc for the vEth0_0 device, but
> that gave me an error:
>
> >$ sudo tc qdisc add dev vEth0_0 root handle 1: multiq
> >$ RTNETLINK answers: Operation not supported
>
> If I wanted to add support for multiple tc queues with the KNI app, where
> should I start making my changes?
> I looked at "lib/librte_kni/rte_kni_fifo.h", but it wasn't clear how I can
> add support for different queues for the KNI app.
> Any pointers would be extremely helpful.
>
> Thanks!
> Malveeka
>
> On Wed, Sep 17, 2014 at 10:47 PM, Zhang, Helin wrote:
>
>> Hi Malveeka
>>
>> The KNI loopback function can provide good enough performance, and more
>> queues/threads can provide better performance. Normal KNI needs to talk
>> to the kernel stack, bridge, etc., so the performance bottleneck is no
>> longer in the DPDK part. You can try more queues/threads to see if
>> performance is better, but do not expect too much!
>>
>> Regards,
>>
>> Helin
>>
>> *From:* Malveeka Tewari [mailto:malveeka@gmail.com]
>> *Sent:* Thursday, September 18, 2014 12:56 PM
>> *To:* Zhang, Helin
>> *Cc:* dev@dpdk.org
>> *Subject:* Re: [dpdk-dev] Maximum possible throughput with the KNI DPDK
>> Application
>>
>> Thanks Helin!
>>
>> I am actually working on a project to quantify the overhead of user-space
>> to kernel-space data copying in the case of conventional socket-based
>> applications.
>> My understanding is that the KNI application involves a user-space ->
>> kernel-space -> user-space data copy to send packets to the igb_uio
>> driver.
>> I wanted to find out if the 7Gb/s throughput is the maximum throughput
>> achievable by the KNI application, or if someone has been able to achieve
>> higher rates by using more cores or some other configuration.
>>
>> Regards,
>>
>> Malveeka
>>
>> On Wed, Sep 17, 2014 at 6:01 PM, Zhang, Helin wrote:
>>
>> Hi Malveeka
>>
>> > -----Original Message-----
>> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Malveeka Tewari
>> > Sent: Thursday, September 18, 2014 6:51 AM
>> > To: dev@dpdk.org
>> > Subject: [dpdk-dev] Maximum possible throughput with the KNI DPDK
>> > Application
>> >
>> > Hi all
>> >
>> > I've been playing with the DPDK API to send out packets using the l2fwd
>> > app and the Kernel Network Interface app with a single Intel 82599 NIC
>> > on an Intel Xeon E5-2630.
>> >
>> > With the l2fwd application, I've been able to achieve 14.88 Mpps with
>> > minimum-sized packets.
>> > However, running iperf with the KNI application gives me only ~7Gb/s
>> > peak throughput.
>>
>> KNI is quite different from other DPDK applications; it is not for
>> fast-path forwarding. It passes the packets received in user space to
>> kernel space, and possibly on to the kernel stack, so don't expect much
>> higher performance. I think 7Gb/s might be a good enough result. What is
>> your real use case for KNI?
>>
>> > Has anyone achieved the 10Gb/s line rate with the KNI application?
>> > Any help would be greatly appreciated!
>> >
>> > Thanks!
>> > Malveeka
>>
>> Regards,
>> Helin
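
A side note on the numbers quoted above, as a rough sanity check rather
than anything stated in the thread: 14.88 Mpps is the theoretical line rate
for minimum-sized (64-byte) frames on 10 Gigabit Ethernet once the 8-byte
preamble/SFD and the 12-byte inter-frame gap are counted, so l2fwd is
already at the wire-rate ceiling, while the KNI path is bounded by the
per-packet copies between user and kernel space. A small stand-alone C
check of that arithmetic:

#include <stdio.h>

int main(void)
{
        /* 10GbE line rate for minimum-sized frames: each 64-byte frame
         * also occupies 8 bytes of preamble/SFD and a 12-byte
         * inter-frame gap on the wire. */
        const double link_bps     = 10e9;        /* 10 Gb/s              */
        const double frame_bytes  = 64.0;        /* minimum Ethernet frame */
        const double overhead     = 8.0 + 12.0;  /* preamble/SFD + IFG   */
        const double bits_on_wire = (frame_bytes + overhead) * 8.0;

        /* Prints roughly 14.88 Mpps, matching the l2fwd figure above. */
        printf("theoretical max: %.2f Mpps\n", link_bps / bits_on_wire / 1e6);
        return 0;
}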