Date: Wed, 3 Aug 2016 16:30:49 +0200
From: Adrien Mazarguil
To: John Fastabend
Cc: Jerin Jacob, dev@dpdk.org, Thomas Monjalon, Helin Zhang, Jingjing Wu,
 Rasesh Mody, Ajit Khaparde, Rahul Lakkireddy, Wenzhuo Lu, Jan Medala,
 John Daley, Jing Chen, Konstantin Ananyev, Matej Vido, Alejandro Lucero,
 Sony Chacko, Pablo de Lara, Olga Shern
Message-ID: <20160803143049.GF3336@6wind.com>
In-Reply-To: <57A0E423.2030804@gmail.com>
Subject: Re: [dpdk-dev] [RFC] Generic flow director/filtering/classification API

Hi John,

I'm replying below to both messages.
On Tue, Aug 02, 2016 at 11:19:15AM -0700, John Fastabend wrote:
> On 16-07-23 02:10 PM, John Fastabend wrote:
> > On 16-07-21 12:20 PM, Adrien Mazarguil wrote:
> >> Hi Jerin,
> >>
> >> Sorry, looks like I missed your reply. Please see below.
> >>
> >
> > Hi Adrian,
> >
> > Sorry for a bit delay but a few comments that may be worth considering.
> >
> > To start with completely agree on the general problem statement and the
> > nice summary of all the current models. Also good start on this.

Thanks.

> >> Considering that allowed pattern/actions combinations cannot be known in
> >> advance and would result in an unpractically large number of capabilities to
> >> expose, a method is provided to validate a given rule from the current
> >> device configuration state without actually adding it (akin to a "dry run"
> >> mode).
> >
> > Rather than have a query/validate process why did we jump over having an
> > intermediate representation of the capabilities? Here you state it is
> > unpractical but we know how to represent parse graphs and the drivers
> > could report their supported parse graph via a single query to a middle
> > layer.
> >
> > This will actually reduce the msg chatter imagine many applications at
> > init time or in boundary cases where a large set of applications come
> > online at once and start banging on the interface all at once seems less
> > than ideal.

Well, I also thought about a kind of graph to represent capabilities but
feared the extra complexity would not be worth the trouble, and thus
settled on the query idea. A couple more reasons:

- Capabilities evolve at the same time as devices are configured. For
  example, if a device supports a single RSS context, then a single rule
  with an RSS action may be created. The graph would have to be rewritten
  accordingly and thus queried/parsed again by the application.

- Expressing capabilities at bit granularity (say, for a matching pattern
  item mask) is complex; there is no way to simplify the representation of
  capabilities without either losing information or making the graph more
  complex to parse than simply providing a flow rule from an application
  point of view.

With that in mind, I am not opposed to the idea, both methods could even
coexist, with the query function eventually evolving to become a front-end
to a capability graph.

Just remember that I am only defining the fundamentals for the initial
implementation, i.e. how rules are expressed as patterns/actions and the
basic functions to manage them, ideally without having to redefine them
ever.

> A bit more details on possible interface for capabilities query,
>
> One way I've used to describe these graphs from driver to software
> stacks is to use a set of structures to build the graph. For fixed
> graphs this could just be *.h file for programmable hardware (typically
> coming from fw update on nics) the driver can read the parser details
> out of firmware and render the structures.

I understand, however I think this approach may be too low-level to
express all the possible combinations. This graph would have to include
possible actions for each possible pattern, all while considering that
some actions are not possible with some patterns and that there are
exclusive actions.

Also, while memory consumption is not really an issue, such a graph may be
huge. It could take a while for the PMD to update it when adding a rule
impacting capabilities.
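To give a better feel for the query/"dry run" approach discussed above,
here is a minimal sketch of how an application could probe support for a
rule; the rte_flow_validate() signature and return convention are
assumptions on my side, mirroring the rte_flow_create() prototype from the
draft header:

 /* Sketch: probe whether a rule is supported in the current device
  * state without creating it. rte_flow_validate() is assumed to mirror
  * rte_flow_create() and to return 0 on success, a negative error
  * otherwise. */
 static int
 rule_is_supported(uint8_t port_id,
                   const struct rte_flow_pattern *pattern,
                   const struct rte_flow_actions *actions)
 {
         return rte_flow_validate(port_id, pattern, actions) == 0;
 }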
> I've done this two ways: one is to define all the fields in their
> own structures using something like,
>
> struct field {
>         char *name;
>         u32 uid;
>         u32 bitwidth;
> };
>
> This gives a unique id (uid) for each field along with its
> width and a user friendly name. The fields are organized into
> headers via a header structure,
>
> struct header_node {
>         char *name;
>         u32 uid;
>         u32 *fields;
>         struct parse_graph *jump;
> };
>
> Each node has a unique id and then a list of fields. Where 'fields'
> is a list of uid's of fields its also easy enough to embed the field
> struct in the header_node if that is simpler its really a style
> question.
>
> The 'struct parse_graph' gives the list of edges from this header node
> to other header nodes. Using a parse graph structure defined
>
> struct parse_graph {
>         struct field_reference ref;
>         __u32 jump_uid;
> };
>
> Again as a matter of style you can embed the parse graph in the header
> node as I did above or do it as its own object.
>
> The field_reference noted below gives the id of the field and the value
> e.g. the tuple (ipv4.protocol, 6) then jump_uid would be the uid of TCP.
>
> struct field_reference {
>         __u32 header_uid;
>         __u32 field_uid;
>         __u32 mask_type;
>         __u32 type;
>         __u8 *value;
>         __u8 *mask;
> };
>
> The cost doing all this is some additional overhead at init time. But
> building generic function over this and having a set of predefined
> uids for well-known protocols such ip, udp, tcp, etc helps. What you
> get for the cost is a few things that I think are worth it. (i) Now
> new protocols can be added/removed without recompiling DPDK (ii) a
> software package can use the capability query to verify the required
> protocols are off-loadable vs a possibly large set of test queries and
> (iii) when we do the programming of the device we can provide a tuple
> (table-uid, header-uid, field-uid, value, mask, priority) and the
> middle layer "knowing" the above graph can verify the command so
> drivers only ever see "good" commands, (iv) finally it should be
> faster in terms of cmds per second because the drivers can map the
> tuple (table, header, field, priority) to a slot efficiently vs
> parsing.
>
> IMO point (iii) and (iv) will in practice make the code much simpler
> because we can maintain common middle layer and not require parsing
> by drivers. Making each driver simpler by abstracting into common
> layer.

Before answering your points, let's consider how applications are going to
be written. Not only do devices not support all possible pattern/actions
combinations, they also have memory constraints.

Whichever method applications use to determine if a flow rule is
supported, at some point they won't be able to add any more due to device
limitations. Sane applications designed to work regardless of the
underlying device won't simply call abort() at this point but provide a
software fallback instead.

My bet is that applications will provide one every time a rule cannot be
added for any reason; they won't even bother to query capabilities except
perhaps for a very small subset, as in "does this device support the ID
action at all?".

Applications that really want/need to know at init time whether all the
rules they may want to possibly create are supported will spend about the
same time in both cases (query or graph). For queries, by iterating on a
list of typical rules. For a graph, by walking through it. Either way, it
won't be done later from the data path.
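For concreteness, here is roughly what an instantiation of the structures
you describe above could look like for a single Ethernet -> IPv4 edge;
every uid, value and field set below is made up purely for illustration:

 /* Toy parse graph fragment built from the structures quoted above;
  * all uids and values are hypothetical. */
 enum { ETH_UID = 1, IPV4_UID = 2, ETH_TYPE_UID = 10 };

 static struct field eth_fields[] = {
         { .name = "ethertype", .uid = ETH_TYPE_UID, .bitwidth = 16 },
 };

 static __u8 ethertype_ipv4[] = { 0x08, 0x00 };
 static u32 eth_field_uids[] = { ETH_TYPE_UID };

 /* Edge taken when ethertype == 0x0800: jump to the IPv4 node. */
 static struct parse_graph eth_edges[] = {
         {
                 .ref = {
                         .header_uid = ETH_UID,
                         .field_uid = ETH_TYPE_UID,
                         .value = ethertype_ipv4,
                 },
                 .jump_uid = IPV4_UID,
         },
 };

 static struct header_node eth_node = {
         .name = "ethernet",
         .uid = ETH_UID,
         .fields = eth_field_uids, /* refers back to eth_fields uids */
         .jump = eth_edges,
 };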
I think that for an application maintainer, writing or even generating a
set of typical rules will also be easier than walking through a graph. It
should also be easier on the PMD side.

For individual points:

(i) should be doable with the query API without recompiling DPDK as well,
since avoiding API/ABI breakage is part of the requirements. If you think
there is a problem regarding this, can you provide a specific example?

(ii) as described above, I think this use case won't be very common in the
wild, except for applications designed for a specific device, and then
they will probably know enough about it to skip the query step entirely.
If time must be spent anyway, it will be in the control path at
initialization time.

(iii) misses the fact that capabilities evolve as flow rules get added;
there is no way for PMDs to only see "valid" rules, also because device
limitations may prevent adding an otherwise valid rule.

(iv) could be true if not for the same reason as (iii). The graph would
have to be verified again before adding another rule.

Note that PMD maintainers are encouraged to make their query function as
fast as possible, they may rely on static data internally for this as
well.

> > Worse in my opinion it requires all drivers to write mostly duplicating
> > validation code where a common layer could easily do this if every
> > driver reported a common data structure representing its parse graph
> > instead. The nice fallout of this initial effort upfront is the driver
> > no longer needs to do error handling/checking/etc and can assume all
> > rules are correct and valid. It makes driver code much simpler to
> > support. And IMO at least by doing this we get some other nice benefits
> > described below.

About duplicated code, my usual reply is that DPDK will provide internal
helper methods to assist PMDs with rules management/parsing/etc. These are
not discussed in the specification because I wanted everyone to agree to
the application side of things first, and it is difficult to know how much
assistance PMDs might need without an initial implementation.

I think this private API will be built at the same time as support is
added to PMDs and maintainers notice generic code that can be shared.
Documentation may be written later once things start to settle down.

> > Another related question is about performance.
> >
> >> Creation
> >> ~~~~~~~~
> >>
> >> Creating a flow rule is similar to validating one, except the rule is
> >> actually created.
> >>
> >> ::
> >>
> >>  struct rte_flow *
> >>  rte_flow_create(uint8_t port_id,
> >>                  const struct rte_flow_pattern *pattern,
> >>                  const struct rte_flow_actions *actions);
> >
> > I gather this implies that each driver must parse the pattern/action
> > block and map this onto the hardware. How many rules per second can this
> > support? I've run into systems that expect a level of service somewhere
> > around 50k cmds per second. So bulking will help at the message level
> > but it seems like a lot of overhead to unpack the pattern/action section.

There is indeed no guarantee on the time taken to create a flow rule, as
debated with Sugesh (see the full thread):

 http://dpdk.org/ml/archives/dev/2016-July/043958.html

I will update the specification accordingly.

Consider that even 50k cmds per second may not be fast enough.
Applications always need to have some kind of fallback ready, and the
ability to know whether a packet has been matched by a rule is a way to
help with that.
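To make the fallback point concrete, I would expect application code to
look roughly like the following sketch; rte_flow_create() is the prototype
quoted above, while sw_classifier_add() stands for a hypothetical helper
on the application side:

 /* Sketch: attempt to offload a rule and fall back to software
  * classification when the device cannot take it for any reason. */
 static struct rte_flow *
 install_rule(uint8_t port_id,
              const struct rte_flow_pattern *pattern,
              const struct rte_flow_actions *actions)
 {
         struct rte_flow *flow;

         flow = rte_flow_create(port_id, pattern, actions);
         if (flow == NULL) {
                 /* Unsupported combination or out of device resources:
                  * handle this rule in software instead of aborting. */
                 sw_classifier_add(pattern, actions);
         }
         return flow; /* NULL means the software path owns the rule */
 }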
In any case, flow rules must be managed from the control path; the data
path must only handle consequences.

> > One strategy I've used in other systems that worked relatively well
> > is if the query for the parse graph above returns a key for each node
> > in the graph then a single lookup can map the key to a node. Its
> > unambiguous and then these operations simply become a table lookup.
> > So to be a bit more concrete this changes the pattern structure in
> > rte_flow_create() into a tuple where the key is known
> > by the initial parse graph query. If you reserve a set of well-defined
> > key values for well known protocols like ethernet, ip, etc. then the
> > query model also works but the middle layer catches errors in this case
> > and again the driver only gets known good flows. So something like this,
> >
> > struct rte_flow_pattern {
> >         uint32_t priority;
> >         uint32_t key;
> >         uint32_t value_length;
> >         u8 *value;
> > }

I agree that having an integer representing an entire pattern/actions
combo would be great, however how do you tell whether you want matched
packets to be duplicated to queue 6 and redirected to queue 3? This method
can be used to check if a type of rule is allowed but not whether it is
actually applicable.

You still need to provide the entire pattern/actions description to create
a flow rule.

> > Also if we have multiple tables what do you think about adding a
> > table_id to the signature. Probably not needed in the first generation
> > but is likely useful for hardware with multiple tables so that it
> > would be,
> >
> > rte_flow_create(uint8_t port_id, uint8_t table_id, ...);

Not sure if I understand the table ID concept, do you mean in case a
device supports entirely different sets of features depending on
something? (What?)

> > Finally one other problem we've had which would be great to address
> > if we are doing a rewrite of the API is adding new protocols to
> > already deployed DPDK stacks. This is mostly a Linux distribution
> > problem where you can't easily update DPDK.
> >
> > In the prototype header linked in this document it seems to add new
> > headers requires adding a new enum in the rte_flow_item_type but there
> > is at least an attempt at a catch all here,
> >
> >> /**
> >>  * Matches a string of a given length at a given offset (in bytes),
> >>  * or anywhere in the payload of the current protocol layer
> >>  * (including L2 header if used as the first item in the stack).
> >>  *
> >>  * See struct rte_flow_item_raw.
> >>  */
> >> RTE_FLOW_ITEM_TYPE_RAW,
> >
> > Actually this is a nice implementation because it works after the
> > previous item in the stack correct?

Yes, this is correct.

> > So you can put it after "known"
> > variable length headers like IP. The limitation is it can't get past
> > undefined variable length headers.

RTE_FLOW_ITEM_TYPE_ANY is made for that purpose. Is that what you are
looking for?

> > However if you use the above parse
> > graph reporting from the driver mechanism and the driver always reports
> > its largest supported graph then we don't have this issue where a new
> > hardware sku/ucode/etc added support for new headers but we have no
> > way to deploy it to existing software users without recompiling and
> > redeploying.

I really would like to understand if you see a limitation regarding this
with the specified API, even assuming DPDK is compiled as a shared library
and thus not part of the user application.
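Going back to the key/value tuple idea quoted earlier in this message,
here is roughly how such a rule could be expressed; the well-known key
name is hypothetical, and the point is that action parameters still have
to be described separately from the key:

 /* Hypothetical use of the key/value pattern structure proposed
  * above; KEY_IPV4_DST stands for one of the reserved well-known
  * keys and is made up here. */
 static uint8_t dst[4] = { 192, 168, 0, 1 };
 static struct rte_flow_pattern pattern = {
         .priority = 0,
         .key = KEY_IPV4_DST,
         .value_length = sizeof(dst),
         .value = dst,
 };
 /* Even with a single key identifying the match, the actions
  * (e.g. duplicate to queue 6 and redirect to queue 3) still need
  * their own parameters, so a full actions description must
  * accompany the key when creating the rule. */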
> > I looked at the git repo but I only saw the header definition I guess
> > the implementation is TBD after there is enough agreement on the
> > interface?

Precisely, I intend to update the tree and send a v2 soon (unfortunately I
did not have much time these past few days to work on this).

Now what if, instead of a seemingly complex parse graph and still in
addition to the query method, enum values were defined for PMDs to report
an array of supported items, typical patterns and actions so applications
can get a quick idea of what devices are capable of without being too
specific. Something like:

 enum rte_flow_capability {
     RTE_FLOW_CAPABILITY_ITEM_ETH,
     RTE_FLOW_CAPABILITY_PATTERN_ETH_IP_TCP,
     RTE_FLOW_CAPABILITY_ACTION_ID,
     ...
 };

I'm not fully convinced about the usefulness of this since it would have
to be maintained separately, but it would be easier than building a dummy
flow rule for simple query purposes.

The main question I have for you is, do you think the core of the
specified API is adequate, assuming it can be extended later with new
methods?

-- 
Adrien Mazarguil
6WIND