From: edgar helmut
Date: Thu, 15 Dec 2016 19:24:27 +0200
To: "Hu, Xuekun"
Cc: "Wiles, Keith", "users@dpdk.org"
Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine

In fact the vm was created with 6G RAM, and its kernel boot args reserve
4 hugepages of 1G each; still, when starting the vm I noticed that
AnonHugePages increased. The relevant qemu process id is 6074, and the
following sums the amount of allocated AnonHugePages:

sudo grep -e AnonHugePages /proc/6074/smaps | awk '{ if ($2 > 0) print $2 }' | awk '{ s += $1 } END { print s }'

which results in 4360192 (the smaps values are in kB), so not all of the
memory is backed by transparent hugepages, though it is more than the
amount of hugepages the guest is supposed to boot with.
How can I be sure that the required 4G of hugepages is really allocated,
and not, for example, that only 2G out of the 4G is allocated (and the
remaining 2G is mapped with the default 4K pages)? (One way to check this
deterministically is sketched at the bottom of this mail.)

thanks

On Thu, Dec 15, 2016 at 4:33 PM, Hu, Xuekun wrote:

> Are you sure the anonhugepages size was equal to the total VM's memory
> size?
> Sometimes, the transparent huge page mechanism doesn't guarantee the app
> is using the real huge pages.
>
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of edgar helmut
> Sent: Thursday, December 15, 2016 9:32 PM
> To: Wiles, Keith
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
>
> I have one single socket, which is an Intel(R) Xeon(R) CPU E5-2640 v4 @
> 2.40GHz.
>
> I just made two more steps:
> 1. setting iommu=pt for better usage of the igb_uio
> 2. using taskset and isolcpus, so now the relevant dpdk cores look like
> they run on dedicated cores.
>
> It improved the performance, though I still see a significant difference
> between the vm and the host which I can't fully explain.
>
> any further idea?
>
> Regards,
> Edgar
>
>
> On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith wrote:
>
> > > On Dec 15, 2016, at 1:20 AM, edgar helmut wrote:
> > >
> > > Hi.
> > > Some help is needed to understand a performance issue on a virtual
> > > machine.
> > >
> > > Running testpmd over the host functions well (testpmd forwards 10g
> > > between two 82599 ports).
> > > However, the same application running on a virtual machine over the
> > > same host results in a huge degradation in performance.
> > > The testpmd then is not even able to read 100mbps from the nic
> > > without drops, and from a profile I made it looks like a dpdk
> > > application runs more than 10 times slower than over the host…
> >
> > Not sure I understand the overall setup, but did you make sure the
> > NIC/PCI bus is on the same socket as the VM, if you have multiple
> > sockets on your platform? If you have to access the NIC across the QPI,
> > it could explain some of the performance drop. Not sure that much drop
> > is this problem.
> >
> > > Setup is ubuntu 16.04 for the host and ubuntu 14.04 for the guest.
> > > Qemu is 2.3.0 (though I tried with a newer one as well).
> > > NICs are connected to the guest using pci passthrough, and the
> > > guest's cpu is set as passthrough (same as the host).
> > > On guest start the host allocates transparent hugepages
> > > (AnonHugePages), so I assume the guest memory is backed with real
> > > hugepages on the host.
> > > I tried binding with igb_uio and with uio_pci_generic, but both
> > > result in the same performance.
> > >
> > > Due to the performance difference I guess I am missing something.
> > >
> > > Please advise: what may I be missing here?
> > > Is this a native penalty of qemu?
> > >
> > > Thanks
> > > Edgar
> >
> > Regards,
> > Keith
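
On the hugepage question above: AnonHugePages in smaps only ever counts
transparent (2M) hugepages, so it cannot confirm 1G backing. One way to
make the 1G backing deterministic, rather than relying on THP, is to back
the guest RAM with hugetlbfs explicitly. A minimal sketch, assuming the
host reserved 4 x 1G pages on its own boot cmdline and reusing qemu pid
6074 from above; the mount point name is arbitrary:

# Host boot args reserve the 1G pages, e.g.:
#   default_hugepagesz=1G hugepagesz=1G hugepages=4
# Mount a 1G hugetlbfs and point qemu's guest RAM at it:
sudo mkdir -p /dev/hugepages-1G
sudo mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1G
qemu-system-x86_64 -m 4096 -mem-path /dev/hugepages-1G -mem-prealloc ...

# With -mem-prealloc the pages are faulted in up front, so HugePages_Free
# on the host should drop by 4 as soon as the guest starts:
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# The guest RAM then shows up in smaps as a file-backed mapping under the
# hugetlbfs mount, with a 1G kernel page size (not under AnonHugePages):
grep -A 20 '/dev/hugepages-1G' /proc/6074/smaps | grep KernelPageSize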
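
And on Keith's NIC/QPI locality question: the socket a NIC hangs off can
be read from sysfs, and the qemu threads can be kept on that node's
cores. A sketch with a hypothetical PCI address 0000:03:00.0 (substitute
the 82599's real address from lspci) and an example core list of 2-5:

# Which NUMA node the NIC sits on (-1 means single socket / unknown):
cat /sys/bus/pci/devices/0000:03:00.0/numa_node
lscpu | grep 'NUMA node'

# Keep qemu on cores of that node, isolated from the host scheduler via
# the host boot arg isolcpus=2-5, then pin every qemu thread there:
for tid in /proc/6074/task/*; do
    sudo taskset -pc 2-5 "$(basename "$tid")"
done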