From: "Tan, Jianfeng"
To: Maxime Coquelin, Yuanhan Liu, Pankaj Chauhan
CC: "dev@dpdk.org", "hemant.agrawal@nxp.com", "shreyansh.jain@nxp.com"
Date: Thu, 18 Aug 2016 10:36:06 +0000
In-Reply-To: <416fbf19-0592-176f-16fa-269b28ff4585@redhat.com>
References: <20160816025614.GM30752@yliu-dev.sh.intel.com> <416fbf19-0592-176f-16fa-269b28ff4585@redhat.com>
Subject: Re: [dpdk-dev] vhost [query] : support for multiple ports and non VMDQ devices in vhost switch
Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Thursday, August 18, 2016 3:43 PM
> To: Tan, Jianfeng; Yuanhan Liu; Pankaj Chauhan
> Cc: dev@dpdk.org; hemant.agrawal@nxp.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] vhost [query] : support for multiple ports and non
> VMDQ devices in vhost switch
>
> Hi,
>
> On 08/18/2016 04:35 AM, Tan, Jianfeng wrote:
> > Hi Maxime,
> >
> > On 8/17/2016 7:18 PM, Maxime Coquelin wrote:
> >> Hi Jianfeng,
> >>
> >> On 08/17/2016 04:33 AM, Tan, Jianfeng wrote:
> >>> Hi,
> >>>
> >>> Please review the proposal below from Pankaj and myself, following an
> >>> offline discussion. (Pankaj, please correct me if I'm going somewhere
> >>> wrong.)
> >>>
> >>> a. Remove the HW-dependent option, --strip-vlan, because different
> >>> kinds of NICs behave differently. It's a bug fix.
> >>> b. Abstract the switching logic into a framework, so that we can develop
> >>> different kinds of switching logic. In this phase, we will have two
> >>> switching logics: (1) a simple software-based MAC-learning switch;
> >>> (2) VMDQ-based switching. Any other advanced switching logic can be
> >>> proposed on top of this framework.
> >>> c. Merge the tep_termination example (vxlan) as a switching logic of the
> >>> framework.
> >>
> >> I was also thinking of making the physical port optional and adding MAC
> >> learning, so this is all good for me.
> >
> > To make it clear, we are not proposing to eliminate the physical port;
> > instead, we just eliminate the binding of VMDQ and virtio ports,
> > superseding it with MAC-learning switching.
>
> So you confirm we could have a setup with only VMs, and no physical
> NIC? That's what I meant when saying "making physical port optional".
Yes, this case would be supported too.

>
> >
> >>
> >> Let me know if I can help with the implementation; I'll be happy to
> >> contribute.
> >
> > Thank you for participating. Currently, I'm working on item a (it will be
> > a quick and simple fix). Pankaj is working on item b (which would be a
> > huge change). Item c depends on item b. So let's wait for the RFC patch
> > from Pankaj and see what we can help with.
>
> Good, let's wait for Pankaj's RFC.
>
> >
> >>
> >>> To be decided:
> >>> d. Support multiple physical ports.
> >>> e. Keep the current way of using the vhost lib directly, or use the
> >>> vhost PMD instead.
> >> Do you see advantages of using the vhost lib directly vs. the PMD?
> >> Wouldn't using the vhost PMD make achieving zero-copy harder?
> >> (I'm not sure; I didn't investigate the topic much for now.)
> >
> > Yes, by using the vhost lib, we can add back the removed zero-copy
> > feature. But my understanding is that zero-copy (nic-to-vm or vm-to-nic)
> > and delayed copy (vm-to-vm) would be great and common features, which
> > should be integrated into the vhost lib and enabled in the vhost PMD, so
> > that all applications can benefit from them. In fact, Yuanhan is working
> > on the delayed copy now. An exception is rx-side zero-copy; I don't know
> > if it's common enough to be integrated into the vhost lib, because it
> > requires hardware queue binding.
>
> OK, I'm interested in knowing how the vm-to-vm delayed copy will be
> implemented.
>
> > Besides, the vhost PMD is easier to use than the vhost lib (personal
> > opinion). Secondly, the vhost PMD is clearer in logic: a 1:1:1 mapping
> > among vhost port, unix socket path, and virtio port. Thirdly, by using
> > the vhost PMD, we can treat vhost ports the same way as physical ports;
> > otherwise, we have to use different APIs to receive/transmit packets.
>
> I'm 100% aligned with you on this; the vhost PMD makes things more
> standard, so more flexible.
>
> >>
> >> Also, if we use the PMD directly, then it would no longer be a vhost
> >> switch only, as it could potentially be used with physical NICs as well.
> >
> > You mean we are building a switch instead of a vhost switch? Yes, a
> > switch can switch packets between virtio-virtio and virtio-physical NIC.
>
> And physical-physical also, as we will be using the standard API with the
> vhost PMD; nothing will prevent using it with only physical ports, no?

Oh yes, I agree.

Thanks,
Jianfeng

>
> Thanks,
> Maxime
>
> >
> > Thanks,
> > Jianfeng
> >
> >>
> >> Any thoughts?
> >>
> >> Thanks,
> >> Maxime
> >