Date: Wed, 2 Dec 2015 16:50:54 +0900
From: Zhihan Jiang
To: Kyle Larose
Cc: users@dpdk.org
Subject: Re: [dpdk-users] DPDK weird forwarding loss

Hey Kyle,

Thank you for the help. I tried adding isolcpus=20-27 (the 8 cores I pinned
to the VM on the host) to /etc/grub.conf, and I also tried adding isolcpus
inside the guest VM, but it does not seem to help (the boot line is sketched
below my signature). I also tried disabling SMT (hyper-threading), but that
only made the results worse.

I ran some performance tests and will list the results here just in case.
To clarify, "TX loss" means packets dropped before they reach DPDK, and
"FWD loss" means packets dropped by the DPDK l3fwd app.

64-byte packets:

MPPS     Total loss   TX loss   FWD loss
14.88    6.810%       0.018%    6.791%
14.00    1.029%       0.003%    1.000%
13.00    0.273%       0.003%    0.270%
12.00    0.211%       0.211%    0.000%
11.00    0.216%       0.216%    0.000%
10.00    0.240%       0.240%    0.000%
09.00    0.250%       0.250%    0.000%
08.00    0.248%       0.248%    0.000%

The trend is fairly clear from these numbers, but it really bugs me that TX
loss decreases as the PPS increases. I was wondering whether it is a bug in
the IXIA, but I tested it today and the packets it sends look OK.

Thanks for any help.

Jack
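For reference, the host boot entry now looks roughly like this. This is a
sketch from memory: the kernel version, root device and the other flags
already on the line are elided, and the guest line mentioned in the comment
is only my reading of what should match the -c6 coremask.

# /etc/grub.conf on the host: isolcpus appended to the existing kernel line
kernel /vmlinuz-<version> ro root=<root device> ... intel_iommu=on iommu=pt isolcpus=20-27

# In the guest, the equivalent would be to isolate the lcores used by
# l3fwd's -c6 coremask (cores 1 and 2), e.g. isolcpus=1,2 on the guest's
# kernel line.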
On Wed, Dec 2, 2015 at 2:55 AM, Kyle Larose wrote:
> Hi Jack,
>
> On Mon, Nov 30, 2015 at 8:42 PM, Zhihan Jiang wrote:
> > Hello,
> >
> > Other settings: intel_iommu=on iommu=pt / blacklist ixgbevf on the host
> > machine / pause frame off / pin all the ports to the same NUMA node &
> > socket / VM uses CPU on the same NUMA node & socket.
> >
> > but there is always ~0.5%-1% packet loss
> >
> > The command line for l3fwd is:
> > ./build/l3fwd -c6 -n4 -w [whitelist devices] --socket-mem=1024,0 -- -p3
> > --config "(0,0,1), (1,0,2)"
>
> Try isolating the CPUs on your guest and host so that the forwarding
> application cannot be preempted by something else.
> See the 'isolcpus' kernel boot parameter.
> http://www.linuxtopia.org/online_books/linux_kernel/kernel_configuration/re46.html
>
> In my experience, drops that happen at low rates are caused by the
> polling thread not being scheduled to receive packets often enough.
> This can either be because it was preempted, which isolcpus will fix,
> or because it is sleeping. IIRC l3fwd doesn't sleep, which leaves the
> first case the only possibility.
>
> Hope that helps,
>
> Kyle
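To follow up on the preemption point above: to check whether the l3fwd
polling lcores really are being preempted, I plan to watch their involuntary
context switches. A rough sketch, assuming pidstat from the sysstat package
is installed in the guest and the process is simply named l3fwd:

# per-thread context-switch rates for the running l3fwd process, once a second
pidstat -w -t -p $(pidof l3fwd) 1
# a steadily non-zero nvcswch/s on the lcore threads would mean they are being
# preempted; near-zero would point away from scheduling as the cause.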