From: Vincent JARDIN
Organization: www.6wind.com
To: Allain Legacy
Cc: ferruh.yigit@intel.com, dev@dpdk.org, ian.jolliffe@windriver.com,
 bruce.richardson@intel.com, john.mcnamara@intel.com, keith.wiles@intel.com,
 thomas.monjalon@6wind.com, jerin.jacob@caviumnetworks.com,
 stephen@networkplumber.org, 3chas3@gmail.com
Date: Tue, 14 Mar 2017 18:37:07 +0100
Message-ID: <4b3a0ff4-3d19-8e4b-0cbf-2a08e6433285@6wind.com>
In-Reply-To: <1489432593-32390-1-git-send-email-allain.legacy@windriver.com>
References: <1488414008-162839-1-git-send-email-allain.legacy@windriver.com>
 <1489432593-32390-1-git-send-email-allain.legacy@windriver.com>
Subject: Re: [dpdk-dev] [PATCH v4 00/17] Wind River Systems AVP PMD vs virtio?

Allain, see inline. I have also restored the thread from
http://dpdk.org/ml/archives/dev/2017-March/060087.html into the same email.

In short: by using ivshmem, you keep people unfocused from virtio.

> Vincent,
> Perhaps you can help me understand why the performance or functionality of
> AVP vs. virtio is relevant to the decision of accepting this driver. There
> are many drivers in the DPDK, most of which provide the same functionality
> at comparable performance rates. AVP is just another such driver. The fact
> that it is virtual rather than physical should not, in my opinion,
> influence the decision to accept this driver.

No, the logic is different: your driver is about QEMU vNIC support, whereas
other PMDs are here to support different vendors and hypervisors. For
instance, in the case of vmxnet3, different versions, evolutions and
optimisations can be managed inside vmxnet3 itself. We need to avoid a
proliferation of PMDs for the same hypervisor when an efficient solution,
virtio, already exists.

> On the other hand, code quality/complexity or lack of a maintainer are
> reasonable grounds for rejection. If our driver is accepted, we are
> committed to maintaining it and testing changes as required by any driver
> framework changes which may impact all drivers.

But then it spreads your capabilities thin. As a community, we need to be
sure that we are all focused on improving existing solutions. Since virtio
is that solution, I would prefer to see more people working on improving
the virtio community instead of everyone losing focus.

> Along the same lines, I do not understand why upstreaming AVP into the
> Linux kernel or qemu/kvm should be a prerequisite for inclusion in the
> DPDK. Continuing my analogy from above, the AVP device is a commercial
> offering tied to the Wind River Systems Titanium product line. It enables
> virtualized DPDK applications and increases DPDK adoption. Just as a
> driver from company XYZ is tied to a commercial NIC that must be purchased
> by a customer, our AVP device is available to operators that choose to
> leverage our Titanium product to implement their cloud solutions. It is
> not our intention to upstream the qemu/kvm or host vswitch portion of the
> AVP device. Our qemu/kvm extensions are GPL, so they are available to our
> customers if they desire to rebuild qemu/kvm with their own proprietary
> extensions.

That was a solution before 2013, but now we are in 2017: vhost-user is
here, so show them the new, proper path. Let's stay focused, please.
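To make that path concrete: a guest DPDK application can consume a
vhost-user backend directly through the virtio-user vdev, with no extra
device model at all. A minimal sketch, assuming a backend already listening
on a socket (the application name and socket path are placeholders):

  #include <stdio.h>
  #include <stdlib.h>
  #include <rte_eal.h>
  #include <rte_debug.h>
  #include <rte_ethdev.h>

  int main(void)
  {
          /* EAL arguments: skip the PCI scan and create one virtio-user
           * port bound to a vhost-user socket (path is a placeholder). */
          char arg0[] = "app";
          char arg1[] = "--no-pci";
          char arg2[] = "--vdev=virtio_user0,path=/tmp/vhost-user0.sock";
          char *eal_argv[] = { arg0, arg1, arg2, NULL };

          if (rte_eal_init(3, eal_argv) < 0)
                  rte_exit(EXIT_FAILURE, "cannot init EAL\n");

          /* virtio_user0 now shows up as a regular ethdev port. */
          printf("%u port(s) available\n", (unsigned)rte_eth_dev_count());
          return 0;
  }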
> Our AVP device was implemented in 2013 in response to the challenge of
> lower than required performance of qemu/virtio for both user space and
> DPDK applications in the VM. Rather than making complex changes to
> qemu/virtio and continuously having to forward-port them as we upgraded
> to newer versions of qemu, we decided to decouple ourselves from that
> code base. We developed the AVP device as an evolution of KNI+ivshmem,
> enhancing both with features that meet the needs of our customers: better
> performance, multi-queue support, live-migration support and hot-plug
> support. As I said in my earlier response, since 2013, qemu/virtio has
> seen improved performance with the introduction of vhost-user. The
> performance of vhost-user has still not reached the levels of our AVP PMD.

Frankly, I was in the same boat as you: I strongly pitched the use of
ivshmem for vNICs for its many benefits. But the Red Hat folks pushed back
and asked everyone to focus on virtio. So, back in 2013, I would have
supported your AVP approach, but now we are in 2017 and we must stay
focused on a single, proper qemu solution => virtio/vhost.

> I acknowledge that the AVP driver could exist as an out-of-tree driver
> loaded as a shared library at runtime. In fact, two years ago we released
> our driver source on GitHub for this very reason. We provide instructions
> and support for building the AVP PMD as a shared library. Some customers
> have adopted this method, while many insist on an in-tree driver for
> several reasons.
>
> Most importantly, they want to eliminate the burden of building and
> supporting an additional package in their product. An in-tree driver
> would eliminate the need for a separate build/packaging process. They
> also want the option to develop directly on the bleeding edge of DPDK
> rather than waiting for us to update our out-of-tree driver against
> stable releases of the DPDK. In this regard, an in-tree driver would
> allow our customers to work directly on the latest DPDK.

Make them move to virtio; then this pain will disappear.

> An in-tree driver provides obvious benefits to our customers, but keep in
> mind that it also benefits the DPDK. If a customer must develop on a
> stable release because they must use an out-of-tree driver, then they are
> less likely to contribute fixes/enhancements/testing upstream. I know
> this first-hand because I work with software from different sources on a
> daily basis, and it is a significant burden to have to reproduce/test
> fixes on master when you build/ship on an older stable release. Accepting
> this driver would increase the potential pool of developers available for
> contributions and reviews.

So, again, you are making a call for unfocusing from virtio, please.

> Again, we are committed to contributing to the DPDK community by
> supporting our driver and upstreaming other fixes/enhancements we develop
> along the way. We feel that if the DPDK is limited to only a single
> virtual driver of any type, then choice and innovation are also limited.
> In the end, if more variety and innovation increase DPDK adoption, then
> this is a win for DPDK and everyone involved in the project.

Frankly, show/demonstrate that AVP can be faster and perform better than
virtio, and then I will support you strongly, since, back in the 2013
days, I strongly believed in innovation and agility with ivshmem. But
right now you have not provided any relevant data; you have only proven
the point that you are getting the community unfocused from virtio, four
years later. We are not in 2013 anymore.

> This patch series submits an initial version of the AVP PMD from Wind
> River Systems. The series includes shared header files, the driver
> implementation, and changes to documentation files in support of this
> new driver. The AVP driver is a shared-memory-based device. It is
> intended to be used as a PMD within a virtual machine running on a Wind
> River virtualization platform. See:
> http://www.windriver.com/products/titanium-cloud/
>
> It enables optimized packet throughput without requiring any packet
> processing in qemu. This allowed us to provide our customers with a
> significant performance increase for both DPDK and non-DPDK applications
> in the VM. Since our AVP implementation supports VM live-migration, it
> is viewed as a better alternative to PCI passthrough or PCI SR-IOV,
> since neither of those supports VM live-migration without manual
> intervention or significant performance penalties.
>
> Since the initial implementation of AVP devices, vhost-user has become
> part of the qemu offering, with a significant performance increase over
> the original virtio implementation. However, vhost-user still does not
> achieve the level of performance that the AVP device can provide to our
> customers for DPDK-based guests.
>
> A number of our customers have requested that we upstream the driver to
> dpdk.org.
>
> v2:
> * Fixed coding style violations that slipped in accidentally because of
>   an out-of-date checkpatch.pl from an older kernel.
>
> v3:
> * Updated the 17.05 release notes to add a section for this new PMD.
> * Added additional info to the AVP NIC guide document to clarify the
>   benefit of using AVP over virtio.
> * Fixed a spelling error in a debug log missed by the local
>   checkpatch.pl version.
> * Split the transmit patch to separate the stats functions, as they
>   accidentally got squashed in the last patchset.
> * Fixed debug log strings so that they exceed 80 characters rather than
>   span multiple lines.
> * Renamed RTE_AVP_* defines that were in avp_ethdev.h to be AVP_* instead.
> * Replaced usage of RTE_WRITE32 and RTE_READ32 with rte_write32_relaxed
>   and rte_read32_relaxed (see the sketch after this list).
> * Declared the rte_pci_id table as const.
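For readers following the v3 changes: rte_read32_relaxed() and
rte_write32_relaxed() are the rte_io.h accessors without implicit memory
barriers. A minimal sketch of that kind of substitution, with an invented
register offset and helper name (neither is taken from the actual AVP
patch):

  #include <stdint.h>
  #include <rte_io.h>

  /* Illustrative register offset and MMIO base; not from the real driver. */
  #define EXAMPLE_INTR_STATUS_OFF 0x10

  static inline uint32_t
  example_ack_interrupt(volatile void *mmio_base)
  {
          volatile void *reg = (volatile uint8_t *)mmio_base +
              EXAMPLE_INTR_STATUS_OFF;

          /* Before: driver-local RTE_READ32()/RTE_WRITE32() macros.
           * After: the relaxed accessors, which skip the implicit
           * barriers when ordering is enforced elsewhere. */
          uint32_t status = rte_read32_relaxed(reg);
          rte_write32_relaxed(status, reg); /* write-1-to-clear style */
          return status;
  }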
> v4:
> * Split the interrupt handlers into a separate patch and moved it to the
>   end of the series.
> * Removed memset() from the stats_get API.
> * Removed usage of RTE_AVP_ALIGNMENT.
> * Removed unnecessary parentheses in rte_avp_common.h.
> * Removed an unneeded "goto unlock" where there were no statements
>   between the goto and the end of the function.
> * Re-tested with pktgen and found that rte_eth_tx_burst() was being
>   called with 0 packets even before starting traffic, which resulted in
>   incrementing oerrors; fixed in the transmit patch (see the sketch
>   after this list).
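On that last point, the usual shape of such a fix is an early return in
the PMD burst function so that an empty poll can never be accounted as an
error. A hypothetical sketch, with invented structure and function names
(not the actual AVP code):

  #include <stdint.h>
  #include <rte_mbuf.h>
  #include <rte_branch_prediction.h>

  /* Hypothetical queue structure; field names are illustrative. */
  struct example_tx_queue {
          int attached;   /* backend ready? */
          uint64_t errors; /* feeds the ethdev oerrors counter */
  };

  static uint16_t
  example_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
                    uint16_t nb_pkts)
  {
          struct example_tx_queue *txq = tx_queue;

          /* Applications may call rte_eth_tx_burst() with nb_pkts == 0
           * (e.g. pktgen polls before traffic starts); return early so
           * an empty burst can never be accounted as an error. */
          if (unlikely(nb_pkts == 0))
                  return 0;

          if (unlikely(!txq->attached)) {
                  /* Backend not ready: count the failed burst. Without
                   * the guard above, idle polling would also land here
                   * and inflate oerrors. */
                  txq->errors++;
                  return 0;
          }

          /* A real implementation would enqueue to the descriptor ring
           * here and return the number of packets accepted; this sketch
           * accepts none. */
          (void)tx_pkts;
          return 0;
  }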
> Allain Legacy (17):
>   config: added attributes for the AVP PMD
>   net/avp: added public header files
>   maintainers: claim responsibility for AVP PMD
>   net/avp: added PMD version map file
>   net/avp: added log macros
>   drivers/net: added driver makefiles
>   net/avp: driver registration
>   net/avp: device initialization
>   net/avp: device configuration
>   net/avp: queue setup and release
>   net/avp: packet receive functions
>   net/avp: packet transmit functions
>   net/avp: device statistics operations
>   net/avp: device promiscuous functions
>   net/avp: device start and stop operations
>   net/avp: migration interrupt handling
>   doc: added information related to the AVP PMD
>
>  MAINTAINERS                             |    6 +
>  config/common_base                      |   10 +
>  config/common_linuxapp                  |    1 +
>  doc/guides/nics/avp.rst                 |   99 ++
>  doc/guides/nics/features/avp.ini        |   17 +
>  doc/guides/nics/index.rst               |    1 +
>  drivers/net/Makefile                    |    1 +
>  drivers/net/avp/Makefile                |   61 +
>  drivers/net/avp/avp_ethdev.c            | 2371 +++++++++++++++++++++++++++++++
>  drivers/net/avp/avp_logs.h              |   59 +
>  drivers/net/avp/rte_avp_common.h        |  427 ++++++
>  drivers/net/avp/rte_avp_fifo.h          |  157 ++
>  drivers/net/avp/rte_pmd_avp_version.map |    4 +
>  mk/rte.app.mk                           |    1 +
>  14 files changed, 3215 insertions(+)
>  create mode 100644 doc/guides/nics/avp.rst
>  create mode 100644 doc/guides/nics/features/avp.ini
>  create mode 100644 drivers/net/avp/Makefile
>  create mode 100644 drivers/net/avp/avp_ethdev.c
>  create mode 100644 drivers/net/avp/avp_logs.h
>  create mode 100644 drivers/net/avp/rte_avp_common.h
>  create mode 100644 drivers/net/avp/rte_avp_fifo.h
>  create mode 100644 drivers/net/avp/rte_pmd_avp_version.map

So, still a NACK, because:
  - there is no performance data for AVP vs. virtio,
  - 2013 is gone,
  - it unfocuses the community from virtio.

Best regards,
  Vincent