To: "Yang, Zhiyong" , "dev@dpdk.org" , "yliu@fridaylinux.org"
Cc: "Wang, Wei W" , "Tan, Jianfeng"
References: <20171130094657.11470-1-zhiyong.yang@intel.com> <961a2372-39c8-70ff-41a1-5379122c0427@redhat.com>
From: Maxime Coquelin
Message-ID: <0371afcb-cf6c-70fe-4330-937084c78692@redhat.com>
Date: Thu, 18 Jan 2018 10:04:15 +0100
Subject: Re: [dpdk-dev] [PATCH 00/11] net/vhostpci: A new vhostpci PMD supporting VM2VM scenario

Hi Zhiyong,

Sorry for the late reply, please find my comments inline:

On 01/11/2018 12:13 PM, Yang, Zhiyong wrote:
> Hi Maxime, all,
> ...
>>> Zhiyong Yang (11):
>>>   drivers/net: add vhostpci PMD base files
>>>   net/vhostpci: public header files
>>>   net/vhostpci: add debugging log macros
>>>   net/vhostpci: add basic framework
>>>   net/vhostpci: add queue setup
>>>   net/vhostpci: add support for link status change
>>>   net/vhostpci: get remote memory region and vring info
>>>   net/vhostpci: add RX function
>>>   net/vhostpci: add TX function
>>>   net/vhostpci: support RX/TX packets statistics
>>>   net/vhostpci: update release note
>>>
>>>  MAINTAINERS                                        |    6 +
>>>  config/common_base                                 |    9 +
>>>  config/common_linuxapp                             |    1 +
>>>  doc/guides/rel_notes/release_18_02.rst             |    6 +
>>>  drivers/net/Makefile                               |    1 +
>>>  drivers/net/vhostpci/Makefile                      |   54 +
>>>  drivers/net/vhostpci/rte_pmd_vhostpci_version.map  |    3 +
>>>  drivers/net/vhostpci/vhostpci_ethdev.c             | 1521 +++++++++++++++++++++
>>>  drivers/net/vhostpci/vhostpci_ethdev.h             |  176 +++
>>>  drivers/net/vhostpci/vhostpci_logs.h               |   69 +
>>>  drivers/net/vhostpci/vhostpci_net.h                |   74 +
>>>  drivers/net/vhostpci/vhostpci_pci.c                |  334 +++++
>>>  drivers/net/vhostpci/vhostpci_pci.h                |  240 ++++
>>>  mk/rte.app.mk                                      |    1 +
>>>  14 files changed, 2495 insertions(+)
>>>  create mode 100644 drivers/net/vhostpci/Makefile
>>>  create mode 100644 drivers/net/vhostpci/rte_pmd_vhostpci_version.map
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_ethdev.c
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_ethdev.h
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_logs.h
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_net.h
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_pci.c
>>>  create mode 100644 drivers/net/vhostpci/vhostpci_pci.h
>>>
>>
>> Thanks for the RFC.
>> It seems there is a lot of code duplication between this series and libvhost-user.
>>
>> Will the non-RFC version reuse libvhost-user? I'm thinking of all the code copied from virtio-net.c, for example.
>>
>> If not, I think this is problematic, as it will double the maintenance cost.
>>
>
> I'm trying to reuse the librte_vhost RX/TX logic and it seems feasible.
> However, I would have to expose many internal librte_vhost data structures, such as virtio_net and vhost_virtqueue, to the PMD layer.

I don't really like that, it looks like a layer violation.

> Since the vhostpci PMD uses a virtio PCI device (the vhostpci device) in the guest, memory allocation and release should be done in drivers/net/vhostpci, as the virtio PMD does.

If you are talking about mbuf allocation/release, then the vhost PMD also does it, so I'm not sure I get the point.

> Vhostpci and vhost can share struct virtio_net to manage the different drivers, since they are very similar.
> Features such as zero copy and RARP packet generation don't need to be supported for vhostpci; we can always disable them.
> What do you think about these thoughts?

Why not put the vhost-pci wrappers in virtio-net? Maybe the TX/RX functions should be reworked to extract the bits common to vhost-user and vhost-pci, taking care not to degrade vhost-user performance. I don't know if this is feasible, what do you think?

Thanks,
Maxime

> thanks
> Zhiyong
>
>> Cheers,
>> Maxime
>
>
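To make the refactoring idea discussed above more concrete, here is a minimal sketch of how a descriptor-walk core might be shared between the two backends. Every name in it (struct vhost_vring_access, vring_desc_view, vhost_dequeue_one, the callback fields) is hypothetical and is not an existing librte_vhost or DPDK API; the only point is that the guest-address translation, which is the part that actually differs between vhost-user and vhost-pci, could be hidden behind a small accessor so the copy loop is written once and struct virtio_net stays internal to the library.

/*
 * Hypothetical sketch only -- none of these names exist in librte_vhost
 * or in the vhostpci series; they are invented for illustration.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct vring_desc_view {
	uint64_t addr;   /* guest-physical address of the buffer */
	uint32_t len;    /* buffer length in bytes */
	uint16_t flags;  /* VRING_DESC_F_* flags */
	uint16_t next;   /* index of the next descriptor in the chain */
};

struct vhost_vring_access {
	/* translate a guest-physical address to a host virtual address:
	 * vhost-user would walk its mmap'd memory regions here, while
	 * vhost-pci would add the offset of the remote memory exposed
	 * through its PCI BAR. */
	void *(*gpa_to_vva)(void *priv, uint64_t gpa, uint32_t len);
	/* read descriptor 'idx' from the ring, wherever it lives */
	int (*read_desc)(void *priv, uint16_t idx, struct vring_desc_view *d);
	void *priv; /* backend-private context */
};

#define VHOST_MAX_CHAIN 128 /* arbitrary guard against cyclic chains */

/* Shared dequeue core: copy one descriptor chain into 'dst'.
 * Returns the number of bytes copied, or -1 on error. */
int
vhost_dequeue_one(const struct vhost_vring_access *acc,
		  uint16_t head, uint8_t *dst, size_t dst_len)
{
	struct vring_desc_view desc;
	size_t copied = 0;
	uint16_t idx = head;
	unsigned int n;

	for (n = 0; n < VHOST_MAX_CHAIN; n++) {
		if (acc->read_desc(acc->priv, idx, &desc) < 0)
			return -1;

		void *src = acc->gpa_to_vva(acc->priv, desc.addr, desc.len);
		if (src == NULL || copied + desc.len > dst_len)
			return -1;

		memcpy(dst + copied, src, desc.len);
		copied += desc.len;

		if (!(desc.flags & 1 /* VRING_DESC_F_NEXT */))
			return (int)copied;
		idx = desc.next;
	}
	return -1; /* chain too long or cyclic */
}

Each backend would then supply its own gpa_to_vva and read_desc implementations; whether the extra indirection is acceptable on the vhost-user fast path is exactly the performance question raised above.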