From: Moon-Sang Lee
To: dev@dpdk.org
Date: Fri, 25 Sep 2015 14:03:32 +0900
Subject: Re: [dpdk-dev] ksoftirqd when using KNI
List-Id: patches and discussions about DPDK

I've observed CPU statistics with the top command and found that ksoftirqd is processing software interrupts, which presumably come from the dpdk-kni application and are handled by KNI and the kernel network stack. My observations:

1. The dpdk-kni application drops half of its rx packets (i.e., it fails to deliver them to skbs). This appears to happen because the rx_q fills up on the KNI side.
I think the rx_q fills up because processing in KNI and the IP stack is much slower than receiving packets from the device via DPDK.

2. Bonding multiple KNI interfaces to spread the load across multiple kernel threads does not reduce that processing time. In addition, packets are transmitted out of order across the multiple KNIs, which requires reordering at the communication endpoint.

3. NAT with the native kernel performs twice as well as KNI + the native kernel, even though the latter does not incur hardware interrupts.

Anyway, my experiment was done in a limited environment, so it does not reflect the general case. My wish for a simple NAT solution does not seem feasible with KNI, so I should change my approach from KNI to a pure DPDK application.

On Fri, Sep 18, 2015 at 8:53 PM, Moon-Sang Lee wrote:
>
> I'm a newbie, and I am testing DPDK KNI with a 1G Intel NIC.
>
> According to my understanding of the DPDK documentation,
> KNI should not raise interrupts when sending/receiving packets.
>
> But when I transmit a bunch of packets to my KNI ports,
> the top command shows ksoftirqd with 50% CPU load.
>
> Would you give me some comments about this situation?
>
> --
> Moon-Sang Lee, SW Engineer
> Email: sang0627@gmail.com
> Wisdom begins in wonder. *Socrates*

--
Moon-Sang Lee, SW Engineer
Email: sang0627@gmail.com
Wisdom begins in wonder. *Socrates*