Date: Mon, 18 Dec 2017 21:21:23 +0100
From: Adrien Mazarguil
To: Stephen Hemminger
Cc: Ferruh Yigit, dev@dpdk.org
Message-ID: <20171218202123.GD4062@6wind.com>
References: <20171124172132.GW4062@6wind.com> <20171218162443.12971-1-adrien.mazarguil@6wind.com> <20171218102304.66b2e98d@xeon-e3>
In-Reply-To: <20171218102304.66b2e98d@xeon-e3>
Subject: Re: [dpdk-dev] [PATCH v1 0/3] Introduce virtual PMD for Hyper-V/Azure platforms

On Mon, Dec 18, 2017 at 10:23:04AM -0800, Stephen Hemminger wrote:
> On Mon, 18 Dec 2017 17:46:19 +0100
> Adrien Mazarguil wrote:
> 
> > Virtual machines hosted by Hyper-V/Azure platforms are fitted with
> > simplified virtual network devices named NetVSC that are used for fast
> > communication between VM to VM, VM to hypervisor, and the outside.
> > 
> > They appear as standard system netdevices to user-land applications, the
> > main difference being they are implemented on top of VMBUS [1] instead of
> > emulated PCI devices.
> > 
> > While this reads like a case for a standard DPDK PMD, there is more to it.
> > 
> > To accelerate outside communication, NetVSC devices as they appear in a VM
> > can be paired with physical SR-IOV virtual function (VF) devices owned by
> > that same VM [2]. Both netdevices share the same MAC address in that case.
> > 
> > When paired, egress and most of the ingress traffic flow through the VF
> > device, while part of it (e.g. multicasts, hypervisor control data) still
> > flows through NetVSC. Moreover VF devices are not retained and disappear
> > during VM migration; from a VM standpoint, they can be hot-plugged anytime
> > with NetVSC acting as a fallback.
> > 
> > Running DPDK applications in such a context involves driving VF devices
> > using their dedicated PMDs in a vendor-independent fashion (to benefit from
> > maximum performance without writing dedicated code) while simultaneously
> > listening to NetVSC and handling the related hot-plug events.
> > 
> > This new virtual PMD (referred to as "hyperv" from this point on)
> > automatically coordinates the Hyper-V/Azure-specific management part
> > described above by relying on vendor-specific, failsafe and tap PMDs to
> > expose a single consolidated Ethernet device usable directly by existing
> > applications.
> > 
> >          .------------------.
> >          | DPDK application |
> >          `--------+---------'
> >                   |
> >            .------+------.
> >            | DPDK ethdev |
> >            `------+------'       Control
> >                   |                 |
> >      .------------+------------.    v    .------------.
> >      |      failsafe PMD       +---------+ hyperv PMD |
> >      `--+-------------------+--'         `------------'
> >         |                   |
> >         |          .........|.........
> >         |          :        |        :
> >    .----+----.     :   .----+----.   :
> >    | tap PMD |     :   | any PMD |   :
> >    `----+----'     :   `----+----'   : <-- Hot-pluggable
> >         |          :        |        :
> >  .------+-------.  :  .-----+-----.  :
> >  | NetVSC-based |  :  | SR-IOV VF |  :
> >  |  netdevice   |  :  |  device   |  :
> >  `--------------'  :  `-----------'  :
> >                    :.................:
> > 
> > Note this diagram differs from that of the original RFC [3], with hyperv no
> > longer acting as a data plane layer.
> > 
> > This initial version of the driver only works in whitelist mode. Users have
> > to provide the --vdev net_hyperv EAL option at least once to trigger it.
> > 
> > Subsequent work will add support for blacklist mode based on automatic
> > detection of the host environment.
> > 
> > [1] http://dpdk.org/ml/archives/dev/2017-January/054165.html
> > [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-hyper-v
> > [3] http://dpdk.org/ml/archives/dev/2017-November/082339.html
> > 
> > Adrien Mazarguil (3):
> >   net/hyperv: introduce MS Hyper-V platform driver
> >   net/hyperv: implement core functionality
> >   net/hyperv: add "force" parameter
> > 
> >  MAINTAINERS                                   |   6 +
> >  config/common_base                            |   6 +
> >  config/common_linuxapp                        |   1 +
> >  doc/guides/nics/features/hyperv.ini           |  12 +
> >  doc/guides/nics/hyperv.rst                    | 119 +++
> >  doc/guides/nics/index.rst                     |   1 +
> >  drivers/net/Makefile                          |   1 +
> >  drivers/net/hyperv/Makefile                   |  58 ++
> >  drivers/net/hyperv/hyperv.c                   | 799 +++++++++++++++++++++
> >  drivers/net/hyperv/rte_pmd_hyperv_version.map |   4 +
> >  mk/rte.app.mk                                 |   1 +
> >  11 files changed, 1008 insertions(+)
> >  create mode 100644 doc/guides/nics/features/hyperv.ini
> >  create mode 100644 doc/guides/nics/hyperv.rst
> >  create mode 100644 drivers/net/hyperv/Makefile
> >  create mode 100644 drivers/net/hyperv/hyperv.c
> >  create mode 100644 drivers/net/hyperv/rte_pmd_hyperv_version.map
> > 
> 
> Please don't call this drivers/net/hyperv/
> that name conflicts with the real netvsc PMD that I am working on.
> 
> Maybe vdev-netvsc?
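For reference, the whitelist mode described in the cover letter above simply
means passing the vdev on the EAL command line. A minimal sketch with testpmd,
where the core and memory-channel options are placeholders and treating the
"force" parameter from patch 3 as a plain devarg is an assumption:

  # Instantiate the virtual PMD explicitly (whitelist mode).
  testpmd -l 0-3 -n 4 --vdev net_hyperv -- -i

  # Assumed devarg form of the "force" parameter added by patch 3.
  testpmd -l 0-3 -n 4 --vdev net_hyperv,force=1 -- -i
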
No problem with that; if vdev-netvsc works for you, I can rename it in v2.

I'm just curious: I was under the impression both drivers would remain
somewhat complementary pending various API updates, in which case wouldn't
it make more sense to keep "netvsc" as the name of the actual NetVSC PMD?
("hyperv" is more a use case than a true PMD.)

Otherwise I also don't mind the current "hyperv" code base being overwritten
by yours as soon as it's ready; that will most likely make it redundant
anyway.

-- 
Adrien Mazarguil
6WIND