From: Stefan Baranoff
To: dev@dpdk.org
Date: Fri, 18 Jul 2014 09:26:59 -0400
Subject: [dpdk-dev] VMWare Performance - vmxnet3-usermap

All,

I've been experimenting with DPDK recently on a variety of bare-metal Linux installations, and so far I have seen wonderful performance improvements on both our Westmere- and Sandy Bridge-based servers. However, when I install ESXi 5.1 (a stand-alone ESXi installation, not linked to a vSphere management system) on one of the Westmere systems and use vmxnet3-usermap with the standard VMware vSwitch, my performance drops way down.

Does anyone have a sense of the pps/bps I can realistically expect to see from vmxnet3-usermap without doing SR-IOV/passthrough? On bare-metal CentOS and RHEL 6.4 we're seeing 14.88 Mpps / 10 Gbps, but on ESXi running a CentOS 6.4 guest we're seeing 500 Kpps / 4 Gbps. Is that reasonable? (Obviously the packet rate is measured with small packets and the data rate with larger packets.) Without going to SR-IOV, is there anything I can do to improve this performance in VMware?

Also, I know SR-IOV breaks many of the HA/auto-balancing features of VMware. Is the same true of vmxnet3-usermap, or is it safe to use with VMs floating around a cluster willy-nilly?

Thanks,
Stefan
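
P.S. For anyone wondering where the 14.88 Mpps figure comes from: it's the theoretical 10GbE line rate for minimum-size (64-byte) frames once the per-packet preamble and inter-frame gap are counted, so our bare-metal numbers are already at the wire limit. A quick back-of-the-envelope check in plain C (nothing DPDK-specific):

#include <stdio.h>

int main(void)
{
	/* Theoretical 10GbE line rate for minimum-size frames:
	 * each 64-byte frame also costs 8 bytes of preamble/SFD
	 * plus a 12-byte inter-frame gap on the wire.
	 */
	const double link_bps = 10e9;                /* 10 Gbit/s       */
	const double wire_bytes = 64.0 + 8.0 + 12.0; /* 84 bytes/packet */

	printf("max rate: %.2f Mpps\n",
	       link_bps / (wire_bytes * 8.0) / 1e6); /* prints 14.88    */
	return 0;
}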
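
P.P.S. In case it helps anyone reproduce the comparison: we sample receive throughput roughly like the sketch below, which polls the ethdev counters once a second. This is only a minimal illustration against the DPDK stats API; the port number, the 1-second interval, and the assumption that EAL/port setup happens elsewhere are placeholders, not our exact harness.

#include <stdio.h>
#include <unistd.h>

#include <rte_ethdev.h>

/* Print rx packet/bit rates for one port, sampled once a second.
 * Assumes rte_eal_init() has run and the port is configured and
 * started before this is called (e.g. from main()).
 */
static void report_rx_rates(uint8_t port)
{
	struct rte_eth_stats prev, cur;

	rte_eth_stats_get(port, &prev);
	for (;;) {
		sleep(1);
		rte_eth_stats_get(port, &cur);
		printf("rx: %.3f Mpps, %.3f Gbps\n",
		       (double)(cur.ipackets - prev.ipackets) / 1e6,
		       (double)(cur.ibytes - prev.ibytes) * 8.0 / 1e9);
		prev = cur;
	}
}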