From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <ferruh.yigit@intel.com>
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by dpdk.org (Postfix) with ESMTP id 8418C10A7
 for <dev@dpdk.org>; Wed,  7 Jun 2017 19:20:24 +0200 (CEST)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 07 Jun 2017 10:20:22 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.39,311,1493708400"; d="scan'208";a="271399955"
Received: from fyigit-mobl1.ger.corp.intel.com (HELO [10.237.220.91])
 ([10.237.220.91])
 by fmsmga004.fm.intel.com with ESMTP; 07 Jun 2017 10:20:20 -0700
To: gowrishankar muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Cc: dev@dpdk.org
References: <1494502172-16950-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <1494503486-20876-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <6466d914-f47f-1ecc-6fec-656893457663@intel.com>
 <c1609a88-79e7-3ed9-424d-22469ab58f28@linux.vnet.ibm.com>
 <82c75bc8-644a-1aa9-4a6b-60061633108d@intel.com>
 <d9529d95-1608-129d-a72c-1f71e4cac801@linux.vnet.ibm.com>
 <625f5ec7-2a01-eb93-684d-685417e8003a@intel.com>
 <079635d4-1e78-29a9-180d-f53c4653bbc8@linux.vnet.ibm.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <00d67e4c-a5a0-9f87-30d4-b0c16c988cb3@intel.com>
Date: Wed, 7 Jun 2017 18:20:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.1.1
MIME-Version: 1.0
In-Reply-To: <079635d4-1e78-29a9-180d-f53c4653bbc8@linux.vnet.ibm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v2] kni: add new mbuf in alloc_q only based
 on its empty slots
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 07 Jun 2017 17:20:25 -0000

On 6/6/2017 3:43 PM, gowrishankar muthukrishnan wrote:
> Hi Ferruh,
> Just wanted to check with you on the verdict of this patch: are we
> waiting for any objection/ack?

I was waiting for more comments; I will ack explicitly.

> 
> Thanks,
> Gowrishankar
> 
> On Thursday 01 June 2017 02:48 PM, Ferruh Yigit wrote:
>> On 6/1/2017 6:56 AM, gowrishankar muthukrishnan wrote:
>>> Hi Ferruh,
>>>
>>> On Wednesday 31 May 2017 09:51 PM, Ferruh Yigit wrote:
>>> <cut>
>>>>> I have sampled the below data on x86_64 for KNI on the ixgbe PMD. The
>>>>> iperf server runs on the remote interface connected to the PMD and the
>>>>> iperf client runs on the KNI interface, so as to create more egress from
>>>>> KNI into DPDK (w/o and with this patch) for 1MB and 100MB of data. rx and
>>>>> tx stats are from the kni app (USR1).
>>>>>
>>>>> 100MB w/o patch, 1.28Gbps
>>>>> rx    tx     alloc_call  alloc_call_mt1tx  freembuf_call
>>>>> 3933  72464  51042       42472             1560540
>>>> Some math:
>>>>
>>>> alloc was called 51042 times, allocating 32 mbufs each time:
>>>> 51042 * 32 = 1633344
>>>>
>>>> freed mbufs: 1560540
>>>>
>>>> used mbufs: 1633344 - 1560540 = 72804
>>>>
>>>> 72804 =~ 72464, so that looks correct.
>>>>
>>>> Which means rte_kni_rx_burst() was called 51042 times and 72464 buffers
>>>> were received.
>>>>
>>>> As you already mentioned, for each call the kernel is able to put only 1-2
>>>> packets into the fifo. This number is close to 3 for my test with KNI PMD.
>>>>
>>>> And for this case, I agree your patch looks reasonable.
>>>>
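
(Adding a small illustration here for anyone skimming the thread: the refill
behaviour behind the math above, as a stand-alone sketch. This is not the
librte_kni code; the burst size of 32 and the call count are taken from the
figures above, and everything else, names included, is made up purely for
illustration.)

#include <stdio.h>

#define BURST 32   /* mbufs requested per refill, as in the math above */

static unsigned alloc_calls, allocated, freed;

/* stand-in for the refill done on every rx burst: always allocate a
 * full burst, free whatever does not fit into the alloc_q */
static void refill(unsigned free_slots)
{
	unsigned kept = free_slots < BURST ? free_slots : BURST;

	alloc_calls++;
	allocated += BURST;
	freed += BURST - kept;
}

int main(void)
{
	unsigned call;

	/* assume the kernel consumed ~2 mbufs between bursts
	 * (the data above averages ~1.4) */
	for (call = 0; call < 51042; call++)
		refill(2);

	printf("alloc calls: %u, allocated: %u, freed: %u, used: %u\n",
	       alloc_calls, allocated, freed, allocated - freed);
	return 0;
}

The point is only that when the queue is already nearly full, almost every
32-mbuf refill ends up mostly freed again, which is what the freembuf_call
figure shows.
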
>>>> But what if kni has more egress traffic, enough to put >= 32 packets
>>>> between each rte_kni_rx_burst() call?
>>>> For that case this patch introduces extra cost to get the allocq_free count.
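
(To put that "extra cost" in terms of operations: the free-slot count of the
alloc_q boils down to reading the two FIFO indexes and masking. A minimal
stand-alone sketch with made-up names and ring size; this is not the actual
rte_kni_fifo definition:)

#include <stdint.h>

#define RING_SIZE 1024u   /* illustrative only; must be a power of two */

struct fifo_sketch {
	volatile uint32_t write;   /* next slot the producer fills   */
	volatile uint32_t read;    /* next slot the consumer empties */
};

/* free slots left in the ring; one slot is kept unused so that a full
 * ring can be told apart from an empty one */
static inline uint32_t fifo_free_count(const struct fifo_sketch *f)
{
	return (f->read - f->write - 1) & (RING_SIZE - 1);
}

The cost is presumably not so much the arithmetic as the fact that, for the
alloc_q, the read index is updated by the kernel side, so the extra load can
touch a cache line the kernel keeps modifying.
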
>>> Are there case(s) where the kernel thread writes to the txq at a rate
>>> higher than the kni application could dequeue it? In my understanding,
>>> KNI is supposed to be a slow path, as it puts packets back into the
>>> network stack (control plane?).
>> The kernel thread doesn't need to be faster than what the app can dequeue;
>> it is enough if the kernel thread can put 32 or more packets for this case,
>> but I see this comes down to the same point.
>>
>> And for kernel multi-thread mode, each kernel thread has more time to
>> enqueue packets, although I don't have the numbers.
>>
>>> Regards,
>>> Gowrishankar
>>>
>>>> Overall I don't disagree with the patch, but I am concerned it could cause
>>>> a performance loss in some cases while improving this one. It would help a
>>>> lot if KNI users could test and comment.
>>>>
>>>> For me, applying the patch didn't make any difference in the final
>>>> performance numbers, but if there is no objection, I am OK with taking
>>>> this patch.
>>>>
>>>>
>>> <cut>
>>>
>