From mboxrd@z Thu Jan  1 00:00:00 1970
From: edgar helmut
Date: Sat, 17 Dec 2016 14:56:21 +0200
To: "Hu, Xuekun"
Cc: "Wiles, Keith", users@dpdk.org
In-Reply-To: <88A92D351643BA4CB23E30315517062662F3D4FF@SHSMSX103.ccr.corp.intel.com>
References: <88A92D351643BA4CB23E30315517062662F3C939@SHSMSX103.ccr.corp.intel.com> <88A92D351643BA4CB23E30315517062662F3D4FF@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
List-Id: DPDK usage discussions

That's what I was afraid of. In fact I need the host to back the entire
guest's memory with hugepages. I will find a way to do that and then run
the tests again.
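Something along these lines should do it. This is only an untested sketch;
the mount point, page count and qemu options below are my assumptions, not
something already confirmed in this thread:

    # host kernel command line: reserve six 1G hugepages at boot
    default_hugepagesz=1G hugepagesz=1G hugepages=6

    # mount a hugetlbfs instance for the 1G pool (mount point is arbitrary)
    mkdir -p /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G

    # start the guest with all of its 6G preallocated from that pool
    qemu-system-x86_64 -m 6G -mem-path /dev/hugepages1G -mem-prealloc ...

With libvirt the equivalent is a <memoryBacking><hugepages/></memoryBacking>
element in the domain XML, so the guest memory no longer depends on
transparent hugepages (AnonHugePages) at all.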
On 16 Dec 2016 3:14 AM, "Hu, Xuekun" wrote:

> You said the VM's memory was 6G, while transparent hugepages only covered
> ~4G (4360192 kB). So some of the memory was mapped to 4K pages.
>
> BTW, the memory used by transparent hugepages is not the hugepage pool you
> reserved in the kernel boot options.
>
> *From:* edgar helmut [mailto:helmut.edgar100@gmail.com]
> *Sent:* Friday, December 16, 2016 1:24 AM
> *To:* Hu, Xuekun
> *Cc:* Wiles, Keith; users@dpdk.org
> *Subject:* Re: [dpdk-users] Dpdk poor performance on virtual machine
>
> In fact the VM was created with 6G of RAM, and its kernel boot args define
> 4 hugepages of 1G each, though when starting the VM I noted that
> AnonHugePages increased.
>
> The relevant qemu process id is 6074, and the following sums the amount of
> allocated AnonHugePages:
>
> sudo grep -e AnonHugePages /proc/6074/smaps | awk '{ if ($2 > 0) print $2 }' | awk '{ s += $1 } END { print s }'
>
> which results in 4360192.
>
> So not all the memory is backed with transparent hugepages, though it is
> more than the amount of hugepages the guest was supposed to boot with.
>
> How can I be sure that the required 4G of hugepages is really allocated,
> and not, for example, that only 2G out of the 4G is allocated (and the
> remaining 2G is mapped with the default 4K pages)?
>
> thanks
>
> On Thu, Dec 15, 2016 at 4:33 PM, Hu, Xuekun wrote:
>
> Are you sure the AnonHugePages size was equal to the total VM's memory
> size? Sometimes the transparent hugepage mechanism doesn't guarantee that
> the app is using real huge pages.
>
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of edgar helmut
> Sent: Thursday, December 15, 2016 9:32 PM
> To: Wiles, Keith
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
>
> I have one single socket, which is an Intel(R) Xeon(R) CPU E5-2640 v4 @
> 2.40GHz.
>
> I just made two more steps:
> 1. setting iommu=pt for better usage of igb_uio
> 2. using taskset and isolcpus, so now the relevant DPDK threads run on
> dedicated cores.
>
> That improved the performance, though I still see a significant difference
> between the VM and the host which I can't fully explain.
>
> Any further ideas?
>
> Regards,
> Edgar
>
> On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith wrote:
>
> > > On Dec 15, 2016, at 1:20 AM, edgar helmut wrote:
> > >
> > > Hi.
> > > Some help is needed to understand a performance issue on a virtual
> > > machine.
> > >
> > > Running testpmd on the host works well (testpmd forwards 10G between
> > > two 82599 ports).
> > > However, the same application running on a virtual machine on the same
> > > host shows a huge degradation in performance.
> > > testpmd is then not even able to read 100 Mbps from the NIC without
> > > drops, and from a profile I made it looks like the DPDK application
> > > runs more than 10 times slower than on the host…
> >
> > Not sure I understand the overall setup, but did you make sure the
> > NIC/PCI bus is on the same socket as the VM, if you have multiple
> > sockets on your platform? If you have to access the NIC across the QPI
> > it could explain some of the performance drop, though I am not sure that
> > much of a drop comes from this alone.
> >
> > >
> > > The setup is Ubuntu 16.04 for the host and Ubuntu 14.04 for the guest.
> > > Qemu is 2.3.0 (though I tried with a newer one as well).
> > > NICs are connected to the guest using PCI passthrough, and the guest's
> > > CPU is set as passthrough (same as the host).
> > > On guest start the host allocates transparent hugepages
> > > (AnonHugePages), so I assume the guest memory is backed by real
> > > hugepages on the host.
> > > I tried binding with igb_uio and with uio_pci_generic, but both give
> > > the same performance.
> > >
> > > Given the performance difference I guess I am missing something.
> > >
> > > Please advise what I may be missing here.
> > > Is this an inherent penalty of qemu?
> > >
> > > Thanks
> > > Edgar
> >
> > Regards,
> > Keith
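P.S. Once the guest memory really comes from the reserved pool, a quick
sanity check on the host should confirm it. This is just a sketch of what
I plan to look at, assuming the same 1G pool; the qemu pid will of course
differ on the next run:

    # before and after booting the guest: HugePages_Free should drop (or
    # HugePages_Rsvd rise) by the guest size, while AnonHugePages stays low
    grep Huge /proc/meminfo

    # per-process confirmation that the qemu mappings are hugetlbfs-backed
    # ("huge" ranges should appear; 6074 is the pid from the earlier test)
    grep huge /proc/6074/numa_maps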