From: Ferruh Yigit
To: Gowrishankar
Cc: dev@dpdk.org
Date: Tue, 16 May 2017 18:15:29 +0100
Message-ID: <6466d914-f47f-1ecc-6fec-656893457663@intel.com>
In-Reply-To: <1494503486-20876-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
References: <1494502172-16950-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <1494503486-20876-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
Subject: Re: [dpdk-dev] [PATCH v2] kni: add new mbuf in alloc_q only based on its empty slots

On 5/11/2017 12:51 PM, Gowrishankar wrote:
> From: Gowrishankar Muthukrishnan
>
> In kni_allocate_mbufs(), we attempt to add max_burst (32) count of mbuf
> always into alloc_q, which is excessively leading too many
> rte_pktmbuf_free() when alloc_q is contending at high packet rate
> (for eg 10Gig data). In a situation when alloc_q fifo can only
> accommodate very few (or zero) mbuf, create only what needed and add
> in fifo.
I remember trying something similar; I also tried allocating only the
nb_packets count read from the kernel, and both produced worse performance.
Can you please share your before/after performance numbers?

kni_allocate_mbufs() is called within rte_kni_rx_burst() whenever packets
are received from the kernel. Under heavy traffic the kernel will always
consume alloc_q before this function is called, and this function will fill
it back. So there should not be many cases where the alloc_q fifo is
already full.

Perhaps this can happen if the application bursts Rx from the kernel in
counts smaller than 32 while the fifo is refilled with a fixed 32 mbufs; is
this your case? Can you measure the number of times rte_pktmbuf_free() is
called because alloc_q is full?

>
> With this patch, we could stop random network stall in KNI at higher packet
> rate (eg 1G or 10G data between vEth0 and PMD) sufficiently exhausting
> alloc_q on above condition. I tested i40e PMD for this purpose in ppc64le.

If the stall happens on the NIC-to-kernel direction, that is the kernel
receive path, while alloc_q is in the kernel transmit path.

>
> Changes:
> v2 - alloc_q free count calculation corrected.
>      line wrap fixed for commit message.
>
> Signed-off-by: Gowrishankar Muthukrishnan
> ---
>  lib/librte_kni/rte_kni.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
> index c3f9208..9c5d485 100644
> --- a/lib/librte_kni/rte_kni.c
> +++ b/lib/librte_kni/rte_kni.c
> @@ -624,6 +624,7 @@ struct rte_kni *
>  	int i, ret;
>  	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
>  	void *phys[MAX_MBUF_BURST_NUM];
> +	int allocq_free;
>
>  	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pool) !=
>  			offsetof(struct rte_kni_mbuf, pool));
> @@ -646,7 +647,9 @@ struct rte_kni *
>  		return;
>  	}
>
> -	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
> +	allocq_free = (kni->alloc_q->read - kni->alloc_q->write - 1) \
> +			& (MAX_MBUF_BURST_NUM - 1);
> +	for (i = 0; i < allocq_free; i++) {
>  		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>  		if (unlikely(pkts[i] == NULL)) {
>  			/* Out of memory */