From: ALeX Wang
To: Ferruh Yigit
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] possible kni bug and proposed fix
Date: Tue, 17 May 2016 10:09:04 -0700
In-Reply-To: <573AED5F.3080105@intel.com>
References: <5739ACFF.4000506@intel.com> <573AED5F.3080105@intel.com>

On 17 May 2016 at 03:07, Ferruh Yigit wrote:

> On 5/16/2016 4:31 PM, ALeX Wang wrote:
> > Hi Ferruh,
> >
> > Thanks for pointing out the 'fill alloc_q with these mbufs _until it
> > gets full_' part.
> >
> > I saw that the size of all the queues is 'KNI_FIFO_COUNT_MAX (1024)'...
> > The corresponding memory required is more than what I specify as
> > 'socket_mem' (since I'm running in a VM)...
> >
>
> If the mempool is not big enough to fill the ring, this explains the
> error log. The logic is to fill the alloc_q, but if the required mbufs
> are missing, every rte_kni_rx_burst() will complain about memory.
>
> But the memory required for the mbufs to fill the ring is not much. It
> should be ~2 Mbytes; are you sure you are missing that much memory?
>

Thanks for reminding me about this. By default I only allocate 2048 mbufs
for my pool... once I raise it to 4096, the issue is gone.

Will `KNI_FIFO_COUNT_MAX` ever change? I want to try adding some
documentation for this.
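In case it helps anyone else hitting the same log message, below is roughly
how I size the pool now. It is only a minimal sketch based on my reading of
this thread -- the names (kni_pool_create, APP_NB_MBUF, APP_MBUF_CACHE,
"kni_mbuf_pool") and the exact numbers are illustrative, not copied from my
application:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/* The KNI alloc_q holds KNI_FIFO_COUNT_MAX (1024) entries and is kept
 * filled from this pool, so the pool has to cover those 1024 mbufs plus
 * whatever the application itself holds between rx and free.  2048 total
 * was too tight for me; 4096 leaves headroom. */
#define APP_NB_MBUF    4096
#define APP_MBUF_CACHE 250

static struct rte_mempool *
kni_pool_create(void)
{
        /* Call after rte_eal_init(); returns NULL on failure. */
        return rte_pktmbuf_pool_create("kni_mbuf_pool", APP_NB_MBUF,
                                       APP_MBUF_CACHE, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}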
> > Also, in my use case, I only `tcpreplay` through the kni interface, and
> > my application only does rx and then frees the mbufs. So there is no tx
> > at all.
> >
> > So, in my case, I still think this is a bug/defect, or there is
> > something I still misunderstand.
> >
> > P.S. The description here seems to be inverted:
> > http://dpdk.org/doc/api/rte__kni_8h.html#a0cdd727cdc227d005fef22c0189f3dfe
> > 'rte_kni_rx_burst' is the one that actually does the 'It handles
> > allocating the mbufs for KNI interface alloc queue.' part.
> >
>
> You are right. That part of the description for rte_kni_rx_burst and
> rte_kni_tx_burst needs to be replaced. Do you want to send a patch?
>

Sure, will do that.

Thanks,
Alex Wang,

> > Thanks,
> > Alex Wang,
> >
> > On 16 May 2016 at 04:20, Ferruh Yigit wrote:
> >
> >     On 5/15/2016 5:48 AM, ALeX Wang wrote:
> >     > Hi,
> >     >
> >     > When using the kni module to test my application inside a
> >     > Debian (VirtualBox) VM (kernel version 4.4), I get
> >     >
> >     > "KNI: Out of memory"
> >     >
> >     > in syslog every time I `tcpreplay` packets through
> >     > the kni interface.
> >     >
> >     > After checking the source code, I saw that when I call
> >     > 'rte_kni_rx_burst()', no matter how many packets are actually
> >     > retrieved, we always call 'kni_allocate_mbufs()' and try to
> >     > allocate 'MAX_MBUF_BURST_NUM' more mbufs... I fix the issue
> >     > with the patch below.
> >     >
> >     > Could you confirm whether this is an actual bug?
> >     >
> >
> >     Hi Alex,
> >
> >     I don't think this is a bug.
> >
> >     kni_allocate_mbufs() will allocate MAX_MBUF_BURST_NUM mbufs, as you
> >     mentioned, and will fill alloc_q with these mbufs _until it gets
> >     full_. If the alloc_q is full and there are remaining mbufs, they
> >     will be freed. So for some cases this isn't the most optimized way,
> >     but there is no defect.
> >
> >     Since you are getting "KNI: Out of memory", something else may be
> >     failing to free mbufs.
> >
> >     mbuf freeing is done in rte_kni_tx_burst(); I can guess two cases
> >     that can cause the problem:
> >     a) not calling rte_kni_tx_burst() frequently, so that all free
> >     mbufs are consumed.
> >     b) calling rte_kni_tx_burst() with a number of mbufs bigger than
> >     MAX_MBUF_BURST_NUM, because this function frees at most
> >     MAX_MBUF_BURST_NUM mbufs; if you are calling rte_kni_tx_burst()
> >     with bigger numbers, this will cause mbufs to get stuck in free_q.
> >
> >
> >     Regards,
> >     ferruh
> >
> >
> > --
> > Alex Wang,
> > Open vSwitch developer
>

--
Alex Wang,
Open vSwitch developer
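P.S. Just to check my understanding of case b) above: is the sketch below
the kind of thing you would recommend for applications that do tx towards
the kni interface? This is only my reading of it -- KNI_TX_CHUNK and
kni_tx_all() are names I made up for illustration, and MAX_MBUF_BURST_NUM
is private to rte_kni.c (32 in the sources I looked at), so the application
has to keep its own matching constant:

#include <rte_common.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

/* Keep this <= the MAX_MBUF_BURST_NUM used inside rte_kni.c. */
#define KNI_TX_CHUNK 32

/* Hand mbufs to the kernel side in chunks no bigger than what
 * rte_kni_tx_burst() frees from free_q per call (case b), and drop
 * whatever the fifo did not accept so nothing leaks.  Calling this
 * regularly from the main loop also covers case a). */
static void
kni_tx_all(struct rte_kni *kni, struct rte_mbuf **pkts, unsigned int nb_pkts)
{
        unsigned int sent = 0;

        while (sent < nb_pkts) {
                unsigned int n = RTE_MIN(nb_pkts - sent,
                                         (unsigned int)KNI_TX_CHUNK);
                unsigned int ret = rte_kni_tx_burst(kni, pkts + sent, n);
                unsigned int i;

                /* Unsent mbufs stay owned by the caller; free them here. */
                for (i = sent + ret; i < sent + n; i++)
                        rte_pktmbuf_free(pkts[i]);

                sent += n;
        }
}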