From mboxrd@z Thu Jan 1 00:00:00 1970
From: "O'Driscoll, Tim"
To: Vincent JARDIN, "Legacy, Allain (Wind River)"
Cc: "Yigit, Ferruh", dev@dpdk.org, "Jolliffe, Ian (Wind River)", "Richardson, Bruce", "Mcnamara, John", "Wiles, Keith", thomas.monjalon@6wind.com, jerin.jacob@caviumnetworks.com, stephen@networkplumber.org, 3chas3@gmail.com
Date: Wed, 15 Mar 2017 04:10:56 +0000
Message-ID: <26FA93C7ED1EAA44AB77D62FBE1D27BA7231E927@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <4b3a0ff4-3d19-8e4b-0cbf-2a08e6433285@6wind.com>
Subject: Re: [dpdk-dev] [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?
List-Id: DPDK patches and discussions

I've included a couple of specific comments inline below, and a general comment here.

We have somebody proposing to add a new driver to DPDK. It's standalone and doesn't affect any of the core libraries. They're willing to maintain the driver and have included a patch to update the MAINTAINERS file. They've also included the relevant documentation changes. I haven't seen any negative comment on the patches themselves, except for a request from John McNamara for an update to the Release Notes, which was addressed in a later version.
I think we should be welcoming this into DPDK rather than questioning/rejecting it. I'd suggest that this is a good topic for the next Tech Board meeting.

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vincent JARDIN
>
> Allain,
>
> See inline. I also restored the thread from
> http://dpdk.org/ml/archives/dev/2017-March/060087.html into the same
> email.
>
> To make it short: by using ivshmem, you keep people unfocused from virtio.

I agree with the desire to have virtio as the preferred solution. I think the way to do that is by promoting the benefits of a standard solution and continually improving its performance, as we are doing. I don't think that's a reason to reject alternative solutions, though.

> > Vincent,
> >
> > Perhaps you can help me understand why the performance or functionality of
> > AVP vs. virtio is relevant to the decision of accepting this driver. There
> > are many drivers in DPDK, most of which provide the same functionality
> > at comparable performance rates. AVP is just another such driver. The fact
> > that it is virtual rather than physical should not, in my opinion, influence
> > the decision to accept this driver.
>
> No, it is a different logic: your driver is about QEMU vNIC support.
> Other PMDs are here to support vendors and different hypervisors. For
> instance, in the case of vmxnet3, different versions, evolutions and
> optimisations can be managed inside vmxnet3. We need to avoid the
> proliferation of PMDs for the same hypervisor when there is already an
> efficient solution: virtio.
>
> > On the other hand, code quality/complexity or lack of a maintainer are
> > reasonable reasons for rejecting. If our driver is accepted, we are
> > committed to maintaining it and testing changes as required by any driver
> > framework changes which may impact all drivers.
>
> But then it diverts your effort.
> As a community, we need to be sure that we are all focused on improving
> existing solutions. Since virtio is the one, I would rather see more
> people working on improving the virtio community instead of getting
> everyone unfocused.
>
> > Along the same lines, I do not understand why upstreaming AVP into the
> > Linux kernel or qemu/kvm should be a prerequisite for inclusion in DPDK.
> > Continuing my analogy from above, the AVP device is a commercial offering
> > tied to the Wind River Systems Titanium product line. It enables
> > virtualized DPDK applications and increases DPDK adoption. Similarly to
> > how a driver from company XYZ is tied to a commercial NIC that must be
> > purchased by a customer, our AVP device is available to operators that
> > choose to leverage our Titanium product to implement their cloud
> > solutions. It is not our intention to upstream the qemu/kvm or host
> > vswitch portion of the AVP device. Our qemu/kvm extensions are GPL, so
> > they are available to our customers if they desire to rebuild qemu/kvm
> > with their own proprietary extensions.
>
> It was a solution before 2013, but now we are in 2017; vhost-user is
> here, so show them the new proper path. Let's stay focused, please.
>
> > Our AVP device was implemented in 2013 in response to the challenge of
> > lower than required performance of qemu/virtio for both user space and
> > DPDK applications in the VM. Rather than making complex changes to
> > qemu/virtio and continuously having to forward-port them as we upgraded
> > to newer versions of qemu, we decided to decouple ourselves from that
> > code base. We developed the AVP device as an evolution of KNI+ivshmem,
> > enhancing both with features that would meet the needs of our customers:
> > better performance, multi-queue support, live-migration support,
> > hot-plug support.
> > As I said in my earlier response, since 2013 qemu/virtio has seen
> > improved performance with the introduction of vhost-user. The
> > performance of vhost-user still has not achieved levels equal to our
> > AVP PMD.
>
> Frankly, I was in the same boat as you: I strongly pitched the use of
> ivshmem for vNICs for its many benefits. But the Red Hat folks pushed
> back and asked everyone to focus on virtio. So, back in 2013, I would
> have supported your AVP approach, but now we are in 2017 and we must
> stay focused on a single, proper qemu solution => virtio/vhost.
>
> > I acknowledge that the AVP driver could exist as an out-of-tree driver
> > loaded as a shared library at runtime. In fact, two years ago we
> > released our driver source on GitHub for this very reason. We provide
> > instructions and support for building the AVP PMD as a shared library.
> > Some customers have adopted this method, while many insist on an
> > in-tree driver for several reasons.
> >
> > Most importantly, they want to eliminate the burden of needing to build
> > and support an additional package in their product. An in-tree driver
> > would eliminate the need for a separate build/packaging process. Also,
> > they want the option to develop directly on the bleeding edge of DPDK
> > rather than waiting for us to update our out-of-tree driver based on
> > stable releases of DPDK. In this regard, an in-tree driver would allow
> > our customers to work directly on the latest DPDK.
>
> Make them move to virtio; then this pain will disappear.
>
> > An in-tree driver provides obvious benefits to our customers, but keep
> > in mind that this also benefits DPDK. If customers must develop on a
> > stable release because they must use an out-of-tree driver, then they
> > are less likely to contribute fixes/enhancements/testing upstream.
> > I know this first hand because I work with software from different
> > sources on a daily basis, and it is a significant burden to have to
> > reproduce/test fixes on master when you build/ship on an older stable
> > release. Accepting this driver would increase the potential pool of
> > developers available for contributions and reviews.
>
> So, again, you are making a call to unfocus from virtio.
>
> > Again, we are committed to contributing to the DPDK community by
> > supporting our driver and upstreaming other fixes/enhancements we
> > develop along the way. We feel that if DPDK is limited to only a single
> > virtual driver of any type, then choice and innovation are also
> > limited. In the end, if more variety and innovation increase DPDK
> > adoption, then this is a win for DPDK and everyone involved in the
> > project.
>
> Frankly, demonstrate that AVP can be faster and perform better than
> virtio, and then I would support you strongly; back in 2013 I strongly
> believed in innovation and agility with ivshmem. But right now you have
> not provided any relevant data; you have only demonstrated that you are
> pulling the community's focus away from virtio four years later. We are
> not in 2013 anymore.
>
> > This patch series submits an initial version of the AVP PMD from Wind
> > River Systems. The series includes shared header files, driver
> > implementation, and changes to documentation files in support of this
> > new driver. The AVP driver is a shared-memory-based device. It is
> > intended to be used as a PMD within a virtual machine running on a
> > Wind River virtualization platform.
> > See: http://www.windriver.com/products/titanium-cloud/
> >
> > It enables optimized packet throughput without requiring any packet
> > processing in qemu.
> > This allowed us to provide our customers with a significant
> > performance increase for both DPDK and non-DPDK applications in the
> > VM. Since our AVP implementation supports VM live-migration, it is
> > viewed as a better alternative to PCI passthrough or PCI SR-IOV, since
> > neither of those supports VM live-migration without manual
> > intervention or significant performance penalties.
> >
> > Since the initial implementation of AVP devices, vhost-user has become
> > part of the qemu offering, with a significant performance increase
> > over the original virtio implementation. However, vhost-user still
> > does not achieve the level of performance that the AVP device can
> > provide to our customers for DPDK-based guests.
> >
> > A number of our customers have requested that we upstream the driver
> > to dpdk.org.
> >
> > v2:
> > * Fixed coding style violations that slipped in accidentally because
> >   of an out-of-date checkpatch.pl from an older kernel.
> >
> > v3:
> > * Updated 17.05 release notes to add a section for this new PMD.
> > * Added additional info to the AVP NIC guide document to clarify the
> >   benefit of using AVP over virtio.
> > * Fixed a spelling error in a debug log missed by the local
> >   checkpatch.pl version.
> > * Split the transmit patch to separate the stats functions, as they
> >   accidentally got squashed in the last patchset.
> > * Fixed debug log strings so that they exceed 80 characters rather
> >   than span multiple lines.
> > * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_*
> >   instead.
> > * Replaced usage of RTE_WRITE32 and RTE_READ32 with
> >   rte_write32_relaxed and rte_read32_relaxed.
> > * Declared the rte_pci_id table as const.
> >
> > v4:
> > * Split our interrupt handlers into a separate patch and moved it to
> >   the end of the series.
> > * Removed memset() from the stats_get API.
> > * Removed usage of RTE_AVP_ALIGNMENT.
> > * Removed unnecessary parentheses in rte_avp_common.h.
> > * Removed unneeded "goto unlock" where there are no statements between
> >   the goto and the end of the function.
> > * Re-tested with pktgen and found that rte_eth_tx_burst() is being
> >   called with 0 packets even before starting traffic, which resulted
> >   in incrementing oerrors; fixed in the transmit patch.
> >
> > Allain Legacy (17):
> >   config: added attributes for the AVP PMD
> >   net/avp: added public header files
> >   maintainers: claim responsibility for AVP PMD
> >   net/avp: added PMD version map file
> >   net/avp: added log macros
> >   drivers/net: added driver makefiles
> >   net/avp: driver registration
> >   net/avp: device initialization
> >   net/avp: device configuration
> >   net/avp: queue setup and release
> >   net/avp: packet receive functions
> >   net/avp: packet transmit functions
> >   net/avp: device statistics operations
> >   net/avp: device promiscuous functions
> >   net/avp: device start and stop operations
> >   net/avp: migration interrupt handling
> >   doc: added information related to the AVP PMD
> >
> >  MAINTAINERS                             |    6 +
> >  config/common_base                      |   10 +
> >  config/common_linuxapp                  |    1 +
> >  doc/guides/nics/avp.rst                 |   99 ++
> >  doc/guides/nics/features/avp.ini        |   17 +
> >  doc/guides/nics/index.rst               |    1 +
> >  drivers/net/Makefile                    |    1 +
> >  drivers/net/avp/Makefile                |   61 +
> >  drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
> >  drivers/net/avp/avp_logs.h              |   59 +
> >  drivers/net/avp/rte_avp_common.h        |  427 ++++++
> >  drivers/net/avp/rte_avp_fifo.h          |  157 ++
> >  drivers/net/avp/rte_pmd_avp_version.map |    4 +
> >  mk/rte.app.mk                           |    1 +
> >  14 files changed, 3215 insertions(+)
> >  create mode 100644 doc/guides/nics/avp.rst
> >  create mode 100644 doc/guides/nics/features/avp.ini
> >  create mode 100644 drivers/net/avp/Makefile
> >  create mode 100644 drivers/net/avp/avp_ethdev.c
> >  create mode 100644 drivers/net/avp/avp_logs.h
> >  create mode 100644 drivers/net/avp/rte_avp_common.h
> >  create mode 100644 drivers/net/avp/rte_avp_fifo.h
> >  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map
>
> So, still a nack because:
> - no performance data for AVP vs. virtio,

I don't think it should be a requirement for Allain to provide performance data in order to justify getting this accepted into DPDK. Keith pointed out in a previous comment on this patch set that even if performance is the same as virtio, there might still be other reasons why people would want to use it.

> - 2013 is gone,
> - it unfocuses from virtio.
>
> Best regards,
> Vincent