From: Scott Daniels
To: dev@dpdk.org
Cc: jing.d.chen at intel.com, vincent.jardin at 6wind.com, kaustubh@research.att.com, az5157@att.com
Date: Wed, 4 Jan 2017 16:09:14 -0500
Message-ID: <586D647A.5040607@research.att.com>
In-Reply-To: <4341B239C0EFF9468EE453F9E9F4604D3C5CA3DA@shsmsx102.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 00/29] Support VFD and DPDK PF + kernel VF on i40e

> Vincent,
>
> Sorry, I missed this reply.
>
>> On 22/12/2016 09:10, Chen, Jing D wrote:
>> > In the meanwhile, we have some test models ongoing to validate
>> > combinations of Linux and DPDK drivers for VF and PF. We'll fully
>> > support the 4 cases below going forward.
>> > 1. DPDK PF + DPDK VF
>> > 2. DPDK PF + Linux VF
>>
>> + DPDK PF + FreeBSD VF
>> + DPDK PF + Windows VF
>> + DPDK PF + OS xyz VF
>>
>
> If all drivers follow the same API spec, what's the problem here?
> What extra DPDK PF effort have you observed?
>
>> > 3. Linux PF + DPDK VF
>> > 4. Linux PF + Linux VF (it's not our scope)
>>
>> So, you confirm the issue: having DPDK become a PF, even if the
>> SR-IOV protocol includes versioning, doubles the combinatory cases.
>
> If extended functions are needed, the answer is yes.
> That's not a big problem, right? I have several workarounds/approaches
> to support extended funcs while following the original API spec. I can
> fix it in this release. In order to have a mature solution, I left it
> here for further implementation.
>
>> > After applying this patch, I've done the tests below without
>> > observing compatibility issues.
>> > 1. DPDK PF + DPDK VF (middle of 16.11 and 17.02 code base). PF
>> >    supports API 1.0 while VF supports API 1.1/1.0.
>> > 2. DPDK PF + Linux VF 1.5.14. PF supports 1.0, while the Linux VF
>> >    supports 1.1/1.0.
>> >
>> > Linux PF + DPDK VF was tested with the 1.0 API a long time ago.
>> > There are some test activities ongoing.
>> >
>> > Finally, please give strong reasons to support your NAK.
>>
>> I feel bad because I do recognize the strong and hard work that you
>> have done on this PF development, but I feel we need first to assess
>> whether DPDK should become a PF or not. I know ixgbe did open the
>> path and that there is some historical DPDK PF support in Intel NICs,
>> but before we generalize it, we have to make sure we are not turning
>> this DataPlane Development Kit into a ControlPlane Driver Kit that we
>> are scared to upstream into the Linux kernel. Even if "DPDK is not
>> Linux", it does not mean that Linux should be ignored. In the case of
>> DPDK on other OSes, likewise, their PFs could be extended too.
>
> Thanks for the recognition of our work on the PF driver. :)
>
>> So currently, yes, I do keep a nack.
>>
>> Since DPDK PF features can go into the Linux PF too, and since Linux
>> (and other hypervisors) already has tools to manage the PF (see
>> iproute2, etc.), why should we have another management path with
>> DPDK? DPDK is aimed to be a Dataplane Development Kit, not a
>> management/control plane driver kit.
>
> Before we debate dataplane vs. control plane, can you answer a
> question: why do we have the generic filter API? Is it an API for the
> dataplane?
>
> I can't imagine that we'll have to say 'you need to use the Linux PF
> driver' when users want to deploy PF + VF cases. Why can't we provide
> an alternative option? They are not exclusive, and users can decide
> which combination is better for them.
> The reason we developed the DPDK PF host driver is that we have
> requirements from users. Our motivation is simple: there are
> requirements, we satisfy them.
>
> Sorry, your NAK can't convince me.
>
>> Assuming you want to use the DPDK PF for dataplane features, that
>> could be OK then, using:
>> - configure one VF on the hypervisor from Linux's PF; let's name it
>>   VF_forPFtraffic, see
>>   http://dpdk.org/doc/guides/howto/flow_bifurcation.html
>> - have no (or few) IOs to the PF's queues,
>> - assign the traffic to all of the hypervisor's VF_forPFtraffic
>>   queues,
>> - run DPDK in the hypervisor's VF_forPFtraffic.
>>
>> Doing so, we get the same benefit from running DPDK over the PF as
>> from running DPDK over VF_forPFtraffic, don't we? It is a benefit of:
>> http://dpdk.org/doc/guides/howto/flow_bifurcation.html
>>
>> Thank you,
>> Vincent
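To make the API 1.0/1.1 compatibility point in the quoted thread
concrete, here is a minimal sketch of the VF-side version handshake.
The struct and fallback rule paraphrase the i40e virtchnl mailbox of
that era; take the names and exact semantics as illustrative
assumptions, not the upstream definitions.

/* Sketch, assuming a virtchnl-style mailbox: the VF offers its
 * highest supported API version and adopts whatever the PF answers,
 * so an API 1.1 VF still runs against an API 1.0 PF. */
#include <stdint.h>
#include <stdio.h>

struct virtchnl_version_info {
	uint32_t major;	/* must match exactly */
	uint32_t minor;	/* 0 = base API, 1 = extended API */
};

static struct virtchnl_version_info
negotiate_api(struct virtchnl_version_info pf_reply)
{
	struct virtchnl_version_info vf_offer = { 1, 1 };
	struct virtchnl_version_info agreed = vf_offer;

	if (pf_reply.major != vf_offer.major) {
		/* Major mismatch: no compatible API at all. */
		agreed.major = 0;
		agreed.minor = 0;
	} else if (pf_reply.minor < vf_offer.minor) {
		/* Older PF: the VF drops to the PF's minor, e.g. 1.0. */
		agreed.minor = pf_reply.minor;
	}
	return agreed;
}

int main(void)
{
	struct virtchnl_version_info pf = { 1, 0 }; /* a 16.11-era DPDK PF */
	struct virtchnl_version_info use = negotiate_api(pf);

	printf("VF will use API %u.%u\n", use.major, use.minor);
	return 0;
}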
With the holidays we are a bit late with our thoughts, but would like
to toss them into the mix. The original NAK is understandable; however,
having the ability to configure the PF via DPDK is advantageous for
several reasons:

1) While some functions may be duplicated and/or available from the
kernel driver, it is often not possible to introduce new kernel drivers
into production without a large amount of additional testing of the
entire platform, which can cause significant delay when introducing a
DPDK-based product. If PF control is part of the DPDK environment, then
only the application needs to pass operational testing before
deployment, a much simpler task.

2) If the driver changes are upstreamed into the kernel proper, the
difficulty of operational readiness testing increases as a new kernel
is introduced, further undermining the ability to quickly and easily
release a DPDK-based application into production. While the application
may eventually fall back on driver and/or kernel support, this could be
years away.

3) As DPDK is being used to configure the NIC, it just seems to make
sense, for consistency, that the configuration capabilities should
include the ability to configure the PF as proposed.

We are currently supporting/enhancing one such DPDK application to
manage the PF and VFs, where the VFs are exposed as SR-IOV devices to
guests: https://github.com/att/vfd/wiki. As new NICs become available,
the ability to transition to them is important to DPDK users. A sketch
of the kind of PF-side VF policy such an application applies follows
the signature.

Collectively,
Scott Daniels, Alex Zelezniak, Kaustubh Joshi

------------------------------------------------------------------------
E. Scott Daniels
Cloud Software Research
AT&T Labs
daniels at research.att.com
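For reference, the sketch mentioned above: a minimal example of the
kind of PF-side VF policy an application such as VFD applies, using the
PMD-private rte_pmd_i40e_* calls this patch series adds. The signatures
follow the 17.02-era rte_pmd_i40e.h, and the port/VF numbers are
placeholders; treat both as assumptions.

#include <stdlib.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_pmd_i40e.h>

/* Apply a conservative policy to one VF from the DPDK PF driver. */
static int
apply_vf_policy(uint8_t pf_port, uint16_t vf_id)
{
	int ret;

	/* Drop frames on which the VF forges its source MAC. */
	ret = rte_pmd_i40e_set_vf_mac_anti_spoof(pf_port, vf_id, 1);
	if (ret != 0)
		return ret;

	/* Likewise for VLAN tags the VF is not allowed to send. */
	ret = rte_pmd_i40e_set_vf_vlan_anti_spoof(pf_port, vf_id, 1);
	if (ret != 0)
		return ret;

	/* Keep the VF out of unicast promiscuous mode. */
	return rte_pmd_i40e_set_vf_unicast_promisc(pf_port, vf_id, 0);
}

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	/* Placeholders: port 0 is assumed to be the i40e PF, VF 0 a
	 * guest-facing VF already created via SR-IOV. */
	if (apply_vf_policy(0, 0) != 0)
		rte_exit(EXIT_FAILURE, "could not apply VF policy\n");

	return 0;
}

Run with the usual EAL arguments after the PF has spawned its VFs; the
policy calls take effect immediately, without touching the VF driver.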