Subject: Re: [dpdk-dev] [PATCH v2] kni: add new mbuf in alloc_q only based on its empty slots
From: Ferruh Yigit
To: gowrishankar muthukrishnan
Cc: dev@dpdk.org
Date: Thu, 1 Jun 2017 10:18:53 +0100
Message-ID: <625f5ec7-2a01-eb93-684d-685417e8003a@intel.com>
References: <1494502172-16950-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com> <1494503486-20876-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com> <6466d914-f47f-1ecc-6fec-656893457663@intel.com> <82c75bc8-644a-1aa9-4a6b-60061633108d@intel.com>
List-Id: DPDK patches and discussions

On 6/1/2017 6:56 AM, gowrishankar muthukrishnan wrote:
> Hi Ferruh,
>
> On Wednesday 31 May 2017 09:51 PM, Ferruh Yigit wrote:
>
>>> I have sampled the data below on x86_64 for KNI on the ixgbe PMD. The
>>> iperf server runs on the remote interface connected to the PMD, and
>>> the iperf client runs on the KNI interface, so as to create more
>>> egress from KNI into DPDK (without and with this patch) for 1MB and
>>> 100MB of data. The rx and tx stats are from the kni app (USR1).
>>>
>>> 100MB w/o patch: 1.28Gbps
>>> rx      tx      alloc_call  alloc_call_mt1tx  freembuf_call
>>> 3933    72464   51042       42472             1560540
>>
>> Some math:
>>
>> alloc called 51042 times, allocating 32 mbufs each time:
>> 51042 * 32 = 1633344
>>
>> freed mbufs: 1560540
>>
>> used mbufs: 1633344 - 1560540 = 72804
>>
>> 72804 =~ 72464, so this looks correct.
>>
>> Which means rte_kni_rx_burst() was called 51042 times and 72464
>> buffers were received.
>>
>> As you already mentioned, on each call the kernel is able to put only
>> 1-2 packets into the fifo. This number is close to 3 in my test with
>> the KNI PMD.
>>
>> And for this case, I agree your patch looks reasonable.
>>
>> But what if KNI has more egress traffic, enough to put >= 32 packets
>> between each rte_kni_rx_burst() call? For that case this patch
>> introduces the extra cost of getting the allocq_free count.
>
> Are there case(s) where we see the kernel thread writing to the txq at
> a rate higher than the KNI application can dequeue it? In my
> understanding, KNI is supposed to be a slow path, as it puts packets
> back into the network stack (control plane?).

The kernel thread doesn't need to be faster than what the application
can dequeue; it is enough if the kernel thread can put 32 or more
packets for this case, but I see this comes to the same thing. And for
kernel multi-thread mode, each kernel thread has more time to enqueue
packets, although I don't have the numbers.

>
> Regards,
> Gowrishankar
>
>> Overall I do not disagree with the patch, but I am concerned it could
>> cause a performance loss in some cases while improving this one. It
>> would help a lot if KNI users could test and comment.
>>
>> For me, applying the patch didn't make any difference in the final
>> performance numbers, but if there is no objection, I am OK to take
>> this patch.