From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 13 Jun 2014 11:11:56 -0400
From: Neil Horman <nhorman@tuxdriver.com>
To: "Doherty, Declan" <declan.doherty@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 0/4] Link Bonding Library
Message-ID: <20140613151156.GA22451@hmsreliant.think-freely.org>
In-Reply-To: <345C63BAECC1AD42A2EC8C63AFFC3ADC13D38DBF@IRSMSX101.ger.corp.intel.com>
References: <20140605110340.GB20841@hmsreliant.think-freely.org> <345C63BAECC1AD42A2EC8C63AFFC3ADC13D37331@IRSMSX101.ger.corp.intel.com> <20140606145426.GA2543@hmsreliant.think-freely.org> <345C63BAECC1AD42A2EC8C63AFFC3ADC13D38DBF@IRSMSX101.ger.corp.intel.com>

On Fri, Jun 13, 2014 at 02:56:47PM +0000, Doherty, Declan wrote:
> > -----Original Message-----
> > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > Sent: Friday, June 6, 2014 3:54 PM
> > To: Doherty, Declan
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2 0/4] Link Bonding Library
> >
> > On Fri, Jun 06, 2014 at 08:23:31AM +0000, Doherty, Declan wrote:
> > > > -----Original Message-----
> > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > Sent: Thursday, May 29, 2014 12:34 PM
> > > > To: Doherty, Declan
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/4] Link Bonding Library
> > > >
> > > > On Thu, May 29, 2014 at 10:33:00AM +0000, Doherty, Declan wrote:
> > > > > -----Original Message-----
> > > > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > > > Sent: Wednesday, May 28, 2014 6:49 PM
> > > > > > To: Doherty, Declan
> > > > > > Cc: dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] Link Bonding Library
> > > > > >
> > > > > > On Wed, May 28, 2014 at 04:32:00PM +0100, declan.doherty@intel.com wrote:
> > > > > > > From: Declan Doherty
> > > > > > >
> > > > > > > Initial release of Link Bonding Library (lib/librte_bond) with
> > > > > > > support for bonding modes:
> > > > > > > 0 - Round Robin
> > > > > > > 1 - Active Backup
> > > > > > > 2 - Balance l2 / l23 / l34
> > > > > > > 3 - Broadcast
> > > > > > >
> > > > > > Why make this a separate library? That requires exposure of yet
> > > > > > another API to applications. Instead, why not write a PMD that can
> > > > > > enslave other PMDs and treat them all as a single interface? That
> > > > > > way this all works with the existing API.
> > > > > >
> > > > > > Neil
> > > > >
> > > > > Hi Neil,
> > > > > the link bonding device is essentially a software PMD, and as such
> > > > > supports all the standard PMD APIs. The only new APIs which the
> > > > > link bonding library introduces are for the control operations of
> > > > > the bonded device, which are currently unsupported by the standard
> > > > > PMD API. Operations such as creating the device, adding/removing
> > > > > slaves, and configuring its modes of operation have no analogous
> > > > > APIs in the current PMD API and required new ones to be created.
> > > >
> > > > That's really only true in spirit, in the sense that this library
> > > > transmits and receives frames like a PMD does. In practice it
> > > > doesn't work and isn't architected the same way.
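[For context on the bonding modes listed in the quoted patch description: mode 0 sends each packet to the next slave in rotation, while the balance modes hash packet headers so that a given flow always lands on the same slave. The sketch below is purely illustrative; the helper names and the hash are invented for this note and are not the librte_bond implementation.]

```c
#include <stdint.h>

/* Mode 0, Round Robin: rotate through the slave list per packet.
 * (Invented names; not the actual librte_bond API.) */
static uint8_t
rr_next_slave(uint8_t *counter, uint8_t num_slaves)
{
	return (*counter)++ % num_slaves;
}

/* Mode 2, Balance l34 flavor: hash the flow tuple so a single flow
 * always maps to one slave while distinct flows spread across all. */
static uint8_t
l34_select_slave(uint32_t src_ip, uint32_t dst_ip,
		 uint16_t src_port, uint16_t dst_port,
		 uint8_t num_slaves)
{
	uint32_t hash = src_ip ^ dst_ip ^
			(((uint32_t)src_port << 16) | dst_port);

	hash ^= hash >> 16;	/* fold high bits into the low bits */
	return (uint8_t)(hash % num_slaves);
}
```

[The l2/l23 variants would hash MAC addresses or MAC+IP instead of the IP/port tuple, but the slave-selection shape is the same.]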
> > > > You don't register the driver using the same method as the other
> > > > PMDs, which means that using --vdev on the command line won't work
> > > > for this type of device. It also implies that applications have to
> > > > be made specifically aware of the fact that they are using a bonded
> > > > interface (i.e. they need to call the bonding setup routines to
> > > > create the bond). I would recommend:
> > > >
> > > > 1) Register the pmd using the PMD_DRIVER_REGISTER macro, like other
> > > > PMDs
> > > > 2) Use the kvargs library to support configuration via the --vdev
> > > > command line option, so bonds can be created administratively,
> > > > rather than just programmatically
> > > > 3) Separate the command api from the PMD functionality into separate
> > > > libraries (use control mbufs to communicate configuration changes to
> > > > the pmd). This will allow users to dynamically load the pmd
> > > > functionality (without compile or run time linking requirements),
> > > > and then optionally use the programmatic interface (or not, if they
> > > > want to save memory)
> > > >
> > > > Regards
> > > > Neil
> > > >
> > > > -----Original Message-----
> > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > Sent: Thursday, June 5, 2014 12:04 PM
> > > > To: Doherty, Declan
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2 0/4] Link Bonding Library
> > > >
> > > > On Wed, Jun 04, 2014 at 04:18:01PM +0100, declan.doherty@intel.com wrote:
> > > > > From: Declan Doherty
> > > > >
> > > > > v2 patch additions:
> > > > > - fix for tx burst broadcast, incrementing the reference count on
> > > > >   each mbuf by the number of slaves - 1
> > > > > - add/remove slave behavior change to fix primary slave port
> > > > >   assignment
> > > > > - patchcheck code fixes
> > > > >
> > > > This doesn't address any of the comments I made previously.
> > > > Neil
> > >
> > > Hi Neil,
> > >
> > > sorry for not replying regarding this earlier. In relation to your
> > > recommendations 1 and 2, I'm currently working on the implementation
> > > of both and should have an additional patch within the next couple of
> > > days to add this functionality.
> > >
> > > Regarding your third recommendation, I have no problem splitting the
> > > library into separate API and PMD libraries, although this does seem
> > > to break convention with the other non hw based pmds, such as
> > > pmd_pcap,
> >
> > I'm not at all sure what you mean here. pmd_pcap registers a driver
> > with the registration mechanism, allowing for the use of the --vdev
> > option on the command line, same as all the other virtual drivers. The
> > physical drivers do the same thing using the --whitelist and
> > --blacklist options. How is the bonding driver following convention
> > with pmd_pcap?
> >
> > > so I would prefer not to go down that route. Also, I don't see the
> > > ability to dynamically load the pmd as justification for the use of
> > > a control mbufs interface. This would require that either additional
> > > queues be created to handle control messages, or that the data TX
> > > queues have the ability to identify and process control mbufs, and
> > > either of these options will have a performance hit,
> >
> > Well, you're correct in that the driver would have to identify control
> > messages, but I don't see how that's a performance hit at all. If you
> > create a control channel queue, then all you have to introduce to the
> > datapath is a single branch to see which queue is transmitting, and
> > you're done. If control mbufs are just that hurtful to performance, it
> > seems that's an area that really needs improvement.
> >
> > Another alternative is to add a control packet method to the
> > rte_eth_dev structure, allowing you to pass control mbufs directly to
> > a pmd out of line with the datapath.
> > That could avoid any additional branches in the datapath, and still
> > allow you to separate user api from driver implementation.
> >
> > > for the core handling the separate control messages or directly to
> > > the tx burst performance. I am not currently aware of any functional
> > > requirements or use cases for bonded devices to be used over
> > > multiple processes, which is the only reason I can see for using an
> > > interface based on control mbufs,
> >
> > The reason is administrative. It's got nothing to do with how many
> > processes want to use an interface; it has to do with an interface
> > knowing whether it's using the bonding driver or not (it shouldn't
> > ever have to, but with your patch set, it's required to use the
> > bonding api to create the interface and add slaves).
> >
> > > and by providing a --vdev configuration option the user can still
> > > write their applications without any knowledge of the additional
> > > link bonding programmatic interface. If in the future a use case
> > > does emerge which requires multi-process control of bonded devices,
> > > I think it would make sense then to add an additional library to
> > > provide this functionality.
> >
> > How exactly does that happen? At what point does the bonding library
> > in its current form register a driver with rte_ethdev without the
> > application first calling some function in the bonding library to do
> > so? Because until that happens, --vdev is non-functional for this pmd.
> >
> > FWIW, the concern for me here is one of packaging. I don't want users
> > to have to pull in a library that they don't absolutely need. The way
> > you have this put together right now, they need the entire bonding
> > library even if they just want a statically configured bond interface.
> >
> > Neil
> >
> > > Regards
> > > Declan
>
> --------------------------------------------------------------
>
> Hi Neil,
> I waited to reply to this until I had v3 of the patch ready to submit.
> I think you mistook the intent of my last response, but anyway I think
> v3 of the patch addresses the majority of your concerns regarding the
> bonded device, which can now be initialized using the --vdev option and
> supports both virtual and physical devices being specified as slaves;
> no knowledge of the bonding API is required by the user application if
> the bonding ports are configured at runtime.

Sounds great, I look forward to seeing it.  Thanks!
Neil

> Regards
>
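[A footnote on the control-queue idea debated above: the "single branch" amounts to checking the queue id once per burst, before any per-packet work, so the data path pays one predictable branch. The sketch below uses invented names and a hypothetical reserved queue 0; it is not a real PMD tx path.]

```c
#include <stdint.h>

#define CTRL_QUEUE_ID 0	/* assumption: one queue reserved for control */

struct pkt { int payload; };

static int ctrl_seen;	/* counters stand in for real handling */
static int data_sent;

static void handle_ctrl(struct pkt *p) { (void)p; ctrl_seen++; }
static void slave_tx(struct pkt *p)    { (void)p; data_sent++; }

/* One branch per burst decides control vs. data; the per-packet
 * loop itself is untouched by the control channel. */
static uint16_t
bond_tx_burst(uint16_t queue_id, struct pkt **pkts, uint16_t n)
{
	uint16_t i;

	if (queue_id == CTRL_QUEUE_ID) {
		for (i = 0; i < n; i++)
			handle_ctrl(pkts[i]);
		return n;
	}
	for (i = 0; i < n; i++)
		slave_tx(pkts[i]);
	return n;
}
```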