From: edgar helmut
Date: Thu, 15 Dec 2016 19:29:54 +0200
To: Stephen Hemminger
Cc: "Hu, Xuekun", "Wiles, Keith", "users@dpdk.org"
Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine

Stephen, this is not the case. It relies on transparent hugepages, which
look like 2M hugepages. Why should it be a problem to back the guest's 1G
pages with 2M pages on the host? Transparent hugepages make the deployment
much more flexible.

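To double-check that the guest memory really is THP-backed, one rough
sketch (assuming a single qemu-system process on the host; the pgrep
pattern is an assumption to adjust) is to sum the AnonHugePages fields
of the qemu process and compare the total against the VM's memory size:

    # PID of the (assumed single) qemu process on the host.
    QEMU_PID=$(pgrep -o qemu-system-x86)

    # Sum the THP-backed portion of each mapping; the total should come
    # close to the VM's memory size if the backing is really huge pages.
    grep AnonHugePages /proc/$QEMU_PID/smaps \
        | awk '{sum += $2} END {print sum " kB"}'

    # System-wide counter, for comparison.
    grep AnonHugePages /proc/meminfo
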
On Thu, Dec 15, 2016 at 7:17 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:

> On Thu, 15 Dec 2016 14:33:25 +0000
> "Hu, Xuekun" wrote:
>
> > Are you sure the anonhugepages size was equal to the total VM's memory
> > size? Sometimes the transparent huge page mechanism doesn't guarantee
> > the app is using the real huge pages.
> >
> > -----Original Message-----
> > From: users [mailto:users-bounces@dpdk.org] On Behalf Of edgar helmut
> > Sent: Thursday, December 15, 2016 9:32 PM
> > To: Wiles, Keith
> > Cc: users@dpdk.org
> > Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
> >
> > I have one single socket, which is an Intel(R) Xeon(R) CPU E5-2640 v4
> > @ 2.40GHz.
> >
> > I just made two more steps:
> > 1. setting iommu=pt for better usage of the igb_uio
> > 2. using taskset and isolcpus, so now it looks like the relevant dpdk
> > cores use dedicated cores.
> >
> > It improved the performance, though I still see a significant
> > difference between the vm and the host which I can't fully explain.
> >
> > Any further ideas?
> >
> > Regards,
> > Edgar
> >
> >
> > On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith wrote:
> >
> > > > On Dec 15, 2016, at 1:20 AM, edgar helmut wrote:
> > > >
> > > > Hi.
> > > > Some help is needed to understand a performance issue on a virtual
> > > > machine.
> > > >
> > > > Running testpmd over the host functions well (testpmd forwards 10g
> > > > between two 82599 ports).
> > > > However, the same application running on a virtual machine over the
> > > > same host results in huge degradation in performance.
> > > > The testpmd then is not even able to read 100mbps from the nic
> > > > without drops, and from a profile I made it looks like a dpdk
> > > > application runs more than 10 times slower than over the host…
> > >
> > > Not sure I understand the overall setup, but did you make sure the
> > > NIC/PCI bus is on the same socket as the VM, if you have multiple
> > > sockets on your platform? If you have to access the NIC across the
> > > QPI it could explain some of the performance drop. Not sure that much
> > > drop is this problem.
> > >
> > > > Setup is ubuntu 16.04 for the host and ubuntu 14.04 for the guest.
> > > > Qemu is 2.3.0 (though I tried with a newer one as well).
> > > > NICs are connected to the guest using pci passthrough, and the
> > > > guest's cpu is set as passthrough (same as the host).
> > > > On guest start the host allocates transparent hugepages
> > > > (AnonHugePages), so I assume the guest memory is backed with real
> > > > hugepages on the host.
> > > > I tried binding with igb_uio and with uio_pci_generic, but both
> > > > result in the same performance.
> > > >
> > > > Due to the performance difference I guess I am missing something.
> > > >
> > > > Please advise: what may I be missing here?
> > > > Is this a native penalty of qemu?
> > > >
> > > > Thanks
> > > > Edgar
> > >
> > > Regards,
> > > Keith
>
> Also make sure you run the host with 1G hugepages and run the guest in
> hugepage memory. If not, the IOMMU has to do 4K operations and thrashes.
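
As a concrete illustration of Stephen's point about 1G pages, a minimal
sketch (the page count, mount point, memory size and the rest of the VM
definition are placeholders to adapt, not a tested recipe):

    # Host kernel command line: reserve 1G pages at boot (needs the
    # pdpe1gb CPU flag):
    #   default_hugepagesz=1G hugepagesz=1G hugepages=8

    # Mount a hugetlbfs instance with a 1G page size for qemu.
    mkdir -p /dev/hugepages-1G
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1G

    # Back the entire guest memory with those pages (qemu 2.x options).
    qemu-system-x86_64 -enable-kvm -m 4096 \
        -mem-path /dev/hugepages-1G -mem-prealloc \
        ...

Inside the guest, DPDK still reserves its own hugepages as usual; the
host-side 1G backing is what keeps the IOMMU mappings large.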
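
And for Keith's question about socket locality, the NIC's NUMA node can
be read from sysfs (the PCI address below is a placeholder for the
passed-through 82599):

    # -1 means single-socket or unknown; otherwise keep the VM's vCPUs
    # on cores of the node printed here.
    cat /sys/bus/pci/devices/0000:03:00.0/numa_node

    # Cross-check which cores belong to which node.
    lscpu | grep -i numa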