From: Thomas Monjalon
To: Ferruh Yigit, Adrien Mazarguil
Cc: dev@dpdk.org, Stephen Hemminger
Date: Wed, 20 Dec 2017 15:13:28 +0100
Subject: Re: [dpdk-dev] [PATCH v1 2/3] net/hyperv: implement core functionality

19/12/2017 21:44, Ferruh Yigit:
> On 12/19/2017 7:06 AM, Adrien Mazarguil wrote:
> > On Mon, Dec 18, 2017 at 05:54:45PM -0800, Ferruh Yigit wrote:
> >> On 12/18/2017 8:46 AM, Adrien Mazarguil wrote:
> >>> As described in more detail in the attached documentation (see
> >>> patch contents), this virtual device driver manages NetVSC
> >>> interfaces in virtual machines hosted by Hyper-V/Azure platforms.
> >>>
> >>> This driver does not manage traffic nor Ethernet devices directly;
> >>> it acts as a thin configuration layer that automatically
> >>> instantiates and controls fail-safe PMD instances combining tap and
> >>> PCI sub-devices, so that each NetVSC interface is exposed as a
> >>> single consolidated port to DPDK applications.
> >>>
> >>> Since PCI sub-devices are hot-pluggable (e.g.
> >>> during VM migration), applications automatically benefit from
> >>> increased throughput when a PCI device is present and fall back on
> >>> NetVSC otherwise, without interruption, thanks to fail-safe's
> >>> hot-plug handling.
> >>>
> >>> Once initialized, the sole job of the hyperv driver is to regularly
> >>> scan for PCI devices to associate with NetVSC interfaces and to
> >>> feed their addresses to the corresponding fail-safe instances.
> >>>
> >>> Signed-off-by: Adrien Mazarguil
> >>
> >> <...>
> >>
> >>> +	RTE_ETH_FOREACH_DEV(port_id) {
> >> <..>
> >>> +	ret = rte_eal_hotplug_remove(bus->name, dev->name);
> >> <..>
> >>> +	ret = rte_eal_hotplug_add("vdev", ctx->devname, ctx->devargs);
> >>
> >> Overall, why is this logic implemented as a network PMD?
> >> Yes, technically you can implement *anything* as a PMD :), but
> >> should we?
> >>
> >> This code does EAL-level work (scanning the bus, adding/removing
> >> devices) for the control path, and it is not a generic solution
> >> either (it is specific to NetVSC and fail-safe).
> >>
> >> Only the device-argument part of a PMD seems used; the rest is
> >> unrelated to being a PMD. It scans for NetVSC changes in the
> >> background and reflects them into the fail-safe PMD...
> >>
> >> Why is this implemented as a PMD rather than another entity, like a
> >> bus driver perhaps?
> >>
> >> Or indeed, why is this in DPDK instead of being in the application?
> >
> > I'll address that last question first: the point of this driver is
> > enabling existing applications to run within a Hyper-V environment
> > unmodified, because they'd otherwise need to manage two driver
> > instances correctly on their own, in addition to hot-plug events
> > during VM migration.
> >
> > Some kind of driver generating a front end to what otherwise appears
> > as two distinct ethdevs to applications is therefore necessary.
> >
> > Currently, without it, users have to manually configure fail-safe
> > properly for each NetVSC interface on their system. Besides the
> > inconvenience, it's not even a possibility with DPDK applications
> > that don't rely on EAL command-line arguments.
> >
> > As such, it's more correctly defined as a "platform" driver rather
> > than a true PMD. It leaves VF device handling to the respective PMDs
> > while automatically managing the platform-specific part itself.
> > There's no simpler alternative when running in blacklist mode (i.e.
> > not specifying any device parameters on the command line).
> >
> > Regarding its presence in drivers/net rather than drivers/bus, the
> > end result from an application standpoint is that each instance
> > exposes a single ethdev, even if not its own (fail-safe's). Buses
> > don't do that. It also allows passing arguments to individual
> > devices through --vdev if needed.
> >
> > You're right about putting device detection at the bus level though,
> > and I think there's work in progress to do just that; this driver
> > will be updated to benefit from it once applied. In the meantime,
> > the code as submitted works fine with the current DPDK code base and
> > addresses an existing use case for which there is no solution at
> > this point.
>
> This may be working, but it looks like a hack to me.
>
> If we need a platform driver, why not properly work on one? If we
> need to improve EAL hotplug, this is a good motivation to improve it.

I agree this code looks like a platform driver. It is the first one of
this kind. Usually, things are managed either in a device driver, a
bus driver, or in EAL.

I also agree that hotplug should be managed in EAL and bus drivers.
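The quoted rte_eal_hotplug_* calls hint at how such a scan can drive
hotplug from a driver. Here is a minimal illustrative sketch, not the
submitted driver's code: struct scan_ctx and vf_is_present() are made
up for the example, while rte_eal_hotplug_add(),
rte_eal_hotplug_remove() and rte_eal_alarm_set() are actual EAL APIs.

#include <stdbool.h>
#include <rte_alarm.h>
#include <rte_dev.h>

#define SCAN_INTERVAL_US (1000 * 1000) /* rescan once per second */

/* Hypothetical per-interface state; the real driver's context differs. */
struct scan_ctx {
	const char *devname; /* e.g. "net_failsafe0" */
	const char *devargs; /* sub-device list fed to fail-safe */
	bool vf_attached;
};

/* Stub: a real driver would probe sysfs/VMBUS for the VF here. */
static bool
vf_is_present(const struct scan_ctx *ctx)
{
	(void)ctx;
	return false;
}

static void
netvsc_scan(void *arg)
{
	struct scan_ctx *ctx = arg;

	if (vf_is_present(ctx) && !ctx->vf_attached) {
		/* VF appeared: expose it through the fail-safe instance. */
		if (rte_eal_hotplug_add("vdev", ctx->devname,
					ctx->devargs) == 0)
			ctx->vf_attached = true;
	} else if (!vf_is_present(ctx) && ctx->vf_attached) {
		/* VF gone (e.g. VM migration): NetVSC path takes over. */
		if (rte_eal_hotplug_remove("vdev", ctx->devname) == 0)
			ctx->vf_attached = false;
	}
	/* Re-arm the periodic scan. */
	rte_eal_alarm_set(SCAN_INTERVAL_US, netvsc_scan, ctx);
}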
> And if this logic needs to be in the application, let it be. Your
> argument is to not change existing applications, but this logic may
> lead to implementing many unrelated things as PMDs just to avoid
> changing applications. Where is the line here?

The line is hardware management. The application should not have to
implement device-specific or platform-specific code. The same
application should be able to work on any platform.

> What is the work in progress, the exact list, that will replace this
> solution? If this hackish solution will prevent that real work, I am
> against it. Is there a way to ensure this will be a temporary
> solution and that the real work will happen?

I think we should explicitly mark this code as temporary, or use the
EXPERIMENTAL tag. It should motivate us to implement what is needed to
completely remove this code later.

About the work in progress:
- When hotplug is fully supported in EAL and bus drivers, the scan
  part of this platform driver should be removed.
- When ethdev probe notifications are integrated, they may also clean
  up a part of this code.
- We may also think about how the future port ownership can improve
  the behaviour of this driver.
- NetVSC is currently supported by the TAP PMD, but it may be replaced
  by a new NetVSC PMD (a VMBUS driver has already been sent).
- We should also continue the work on the configuration file. Such
  user configuration may help with platform behaviours.

As a conclusion, there are a lot of improvements in progress, and I am
really happy to see Hyper-V supported in DPDK. I think this driver
must be only a step towards first-class support, like
KVM/Qemu/vhost/virtio. As there is no API implied here, I am OK to
progress step by step.
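For reference, the manual fail-safe configuration this driver is meant
to automate looks roughly as follows. This is a hypothetical usage
sketch: the PCI address, interface name and exact devargs depend on
the setup and DPDK version.

# NetVSC netdevice eth1, VF at PCI address 0002:00:02.0.
# One fail-safe port wraps a TAP sub-device (slow path via eth1)
# and the PCI VF (fast path); the application sees a single port.
testpmd -l 0-3 -n 4 --vdev \
	'net_failsafe0,dev(net_tap0,remote=eth1),dev(0002:00:02.0)' \
	-- -i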