From mboxrd@z Thu Jan  1 00:00:00 1970
From: gowrishankar muthukrishnan
To: Ferruh Yigit
Cc: dev@dpdk.org
Date: Thu, 1 Jun 2017 11:26:14 +0530
References: <1494502172-16950-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <1494503486-20876-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <6466d914-f47f-1ecc-6fec-656893457663@intel.com>
 <82c75bc8-644a-1aa9-4a6b-60061633108d@intel.com>
MIME-Version: 1.0
In-Reply-To: <82c75bc8-644a-1aa9-4a6b-60061633108d@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] kni: add new mbuf in alloc_q only based on its empty slots
List-Id: DPDK patches and discussions

Hi Ferruh,

On Wednesday 31 May 2017 09:51 PM, Ferruh Yigit wrote:
>> I have sampled the below data on x86_64 for KNI on the ixgbe PMD. The
>> iperf server runs on the remote interface connected to the PMD, and the
>> iperf client runs on the KNI interface, so as to create more egress from
>> KNI into DPDK (without and with this patch) for 1MB and 100MB of data.
>> The rx and tx stats are from the kni app (USR1).
>>
>> 100MB w/o patch: 1.28Gbps
>>   rx      tx       alloc_call   alloc_call_mt1tx   freembuf_call
>>   3933    72464    51042        42472              1560540
>
> Some math:
>
> alloc was called 51042 times, allocating 32 mbufs each time:
> 51042 * 32 = 1633344
>
> freed mbufs: 1560540
>
> used mbufs: 1633344 - 1560540 = 72804
>
> 72804 =~ 72464, so that looks correct.
>
> This means rte_kni_rx_burst() was called 51042 times and 72464 buffers
> were received.
>
> As you already mentioned, on each call the kernel is able to put only 1-2
> packets into the fifo. This number is close to 3 for my test with the KNI
> PMD.
>
> And for this case, I agree your patch looks reasonable.
>
> But what if kni has more egress traffic, enough to put >= 32 packets
> into the fifo between each rte_kni_rx_burst() call?
> For that case this patch introduces the extra cost of getting the
> allocq_free count.

Are there cases where the kernel thread writes to txq at a rate higher than
the KNI application can dequeue it? In my understanding, KNI is supposed to
be a slow path, as it puts packets back into the network stack (control
plane?).

Regards,
Gowrishankar

> Overall I do not disagree with the patch, but I have a concern that it
> could cause a performance loss in some cases while improving this one. It
> would help a lot if KNI users could test and comment.
>
> For me, applying the patch didn't make any difference in the final
> performance numbers, but if there is no objection, I am OK to get this
> patch.