Date: Mon, 18 Dec 2017 17:46:19 +0100
From: Adrien Mazarguil
To: Ferruh Yigit
Cc: dev@dpdk.org, Stephen Hemminger
Message-ID: <20171218162443.12971-1-adrien.mazarguil@6wind.com>
References: <20171124172132.GW4062@6wind.com>
In-Reply-To: <20171124172132.GW4062@6wind.com>
Subject: [dpdk-dev] [PATCH v1 0/3] Introduce virtual PMD for Hyper-V/Azure platforms

Virtual machines hosted by Hyper-V/Azure platforms are fitted with
simplified virtual network devices named NetVSC, used for fast VM-to-VM,
VM-to-hypervisor and outside communication. They appear as standard
system netdevices to user-land applications, the main difference being
that they are implemented on top of VMBUS [1] instead of emulated PCI
devices.

While this reads like a case for a standard DPDK PMD, there is more to
it. To accelerate outside communication, NetVSC devices as they appear
in a VM can be paired with physical SR-IOV virtual function (VF) devices
owned by that same VM [2]. Both netdevices share the same MAC address in
that case.
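
As an aside, this pairing can be observed from user-land by matching
MAC addresses under /sys/class/net. The stand-alone program below is
only a sketch of that idea (it is not code from this series and assumes
a Linux guest; array sizes are arbitrary):

/*
 * Stand-alone sketch: list pairs of netdevices sharing a MAC address,
 * which is how a NetVSC interface and its VF counterpart show up.
 * Linux-only, relies on /sys/class/net.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	char name[64][32];
	char mac[64][32];
	unsigned int n = 0;
	unsigned int i;
	unsigned int j;
	struct dirent *ent;
	DIR *dir = opendir("/sys/class/net");

	if (dir == NULL)
		return 1;
	/* Collect interface names and their MAC addresses from sysfs. */
	while ((ent = readdir(dir)) != NULL && n != 64) {
		char path[288];
		FILE *file;

		if (ent->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path),
			 "/sys/class/net/%s/address", ent->d_name);
		file = fopen(path, "r");
		if (file == NULL)
			continue;
		if (fgets(mac[n], sizeof(mac[n]), file) != NULL) {
			mac[n][strcspn(mac[n], "\n")] = '\0';
			snprintf(name[n], sizeof(name[n]), "%s", ent->d_name);
			++n;
		}
		fclose(file);
	}
	closedir(dir);
	/* Interfaces with identical MAC addresses are candidate pairs. */
	for (i = 0; i != n; ++i)
		for (j = i + 1; j != n; ++j)
			if (mac[i][0] && !strcmp(mac[i], mac[j]))
				printf("%s and %s share MAC address %s\n",
				       name[i], name[j], mac[i]);
	return 0;
}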
When paired, egress and most of the ingress traffic flow through the VF
device, while part of it (e.g. multicasts, hypervisor control data)
still flows through NetVSC. Moreover VF devices are not retained and
disappear during VM migration; from a VM standpoint, they can be
hot-plugged anytime with NetVSC acting as a fallback.

Running DPDK applications in such a context involves driving VF devices
using their dedicated PMDs in a vendor-independent fashion (to benefit
from maximum performance without writing dedicated code) while
simultaneously listening to NetVSC and handling the related hot-plug
events.

This new virtual PMD (referred to as "hyperv" from this point on)
automatically coordinates the Hyper-V/Azure-specific management part
described above by relying on vendor-specific, failsafe and tap PMDs to
expose a single consolidated Ethernet device usable directly by existing
applications.

         .------------------.
         | DPDK application |
         `--------+---------'
                  |
           .------+------.
           | DPDK ethdev |
           `------+------'       Control
                  |                 |
     .------------+------------.    v    .------------.
     |      failsafe PMD       +---------+ hyperv PMD |
     `--+-------------------+--'         `------------'
        |                   |
        |          .........|.........
        |          :        |        :
   .----+----.     :   .----+----.   :
   | tap PMD |     :   | any PMD |   :
   `----+----'     :   `----+----'   :  <-- Hot-pluggable
        |          :        |        :
 .------+-------.  :  .-----+-----.  :
 | NetVSC-based |  :  | SR-IOV VF |  :
 |  netdevice   |  :  |  device   |  :
 `--------------'  :  `-----------'  :
                   :.................:

Note this diagram differs from that of the original RFC [3], with hyperv
no longer acting as a data plane layer.

This initial version of the driver only works in whitelist mode. Users
have to provide the --vdev net_hyperv EAL option at least once to
trigger it. Subsequent work will add support for blacklist mode based on
automatic detection of the host environment.

[1] http://dpdk.org/ml/archives/dev/2017-January/054165.html
[2] https://docs.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-hyper-v
[3] http://dpdk.org/ml/archives/dev/2017-November/082339.html

Adrien Mazarguil (3):
  net/hyperv: introduce MS Hyper-V platform driver
  net/hyperv: implement core functionality
  net/hyperv: add "force" parameter

 MAINTAINERS                                   |   6 +
 config/common_base                            |   6 +
 config/common_linuxapp                        |   1 +
 doc/guides/nics/features/hyperv.ini           |  12 +
 doc/guides/nics/hyperv.rst                    | 119 +++
 doc/guides/nics/index.rst                     |   1 +
 drivers/net/Makefile                          |   1 +
 drivers/net/hyperv/Makefile                   |  58 ++
 drivers/net/hyperv/hyperv.c                   | 799 +++++++++++++++++++++
 drivers/net/hyperv/rte_pmd_hyperv_version.map |   4 +
 mk/rte.app.mk                                 |   1 +
 11 files changed, 1008 insertions(+)
 create mode 100644 doc/guides/nics/features/hyperv.ini
 create mode 100644 doc/guides/nics/hyperv.rst
 create mode 100644 drivers/net/hyperv/Makefile
 create mode 100644 drivers/net/hyperv/hyperv.c
 create mode 100644 drivers/net/hyperv/rte_pmd_hyperv_version.map

-- 
2.11.0
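
As a usage sketch (not part of this series), here is what an existing
application does to see the consolidated port: pass the --vdev
net_hyperv EAL option and use the regular ethdev API. Only 17.11-era
calls are assumed (rte_eal_init(), rte_eth_dev_count(),
rte_eth_dev_info_get(), rte_eth_macaddr_get()):

/*
 * Usage sketch only: EAL arguments are expected to include
 * "--vdev net_hyperv" so the consolidated port gets instantiated;
 * nothing below is specific to it.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

int
main(int argc, char **argv)
{
	uint16_t n;
	uint16_t i;

	/* e.g. ./example --vdev net_hyperv */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL initialization failed\n");
		return 1;
	}
	/* List every ethdev port along with its driver and MAC address. */
	n = rte_eth_dev_count();
	for (i = 0; i != n; ++i) {
		struct rte_eth_dev_info info;
		struct ether_addr mac;

		rte_eth_dev_info_get(i, &info);
		rte_eth_macaddr_get(i, &mac);
		printf("port %u: driver %s,"
		       " MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
		       i, info.driver_name,
		       mac.addr_bytes[0], mac.addr_bytes[1],
		       mac.addr_bytes[2], mac.addr_bytes[3],
		       mac.addr_bytes[4], mac.addr_bytes[5]);
	}
	return 0;
}

No hyperv-specific API is involved; the consolidated port instantiated
behind the scenes is driven like any other ethdev port.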