Date: Wed, 4 Jan 2017 10:53:50 +0100
From: Simon Horman
To: Adrien Mazarguil
Cc: dev@dpdk.org
Message-ID: <20170104095347.GA24762@penelope.horms.nl>
In-Reply-To: <20161222124804.GD10340@6wind.com>
Subject: Re: [dpdk-dev] [PATCH v2 00/25] Generic flow API (rte_flow)

On Thu, Dec 22, 2016 at 01:48:04PM +0100, Adrien Mazarguil wrote:
> On Wed, Dec 21, 2016 at 05:19:16PM +0100, Simon Horman wrote:
> > On Fri, Dec 16, 2016 at 05:24:57PM +0100, Adrien Mazarguil wrote:
> > > As previously discussed in RFC v1 [1] and RFC v2 [2], with changes
> > > described in [3] (also pasted below), here is the first non-draft series
> > > for this new API.
> > >
> > > Its capabilities are so generic that its name had to be vague; it may be
> > > called "Generic flow API", "Generic flow interface" (possibly shortened
> > > as "GFI") to refer to the name of the new filter type, or "rte_flow" from
> > > the prefix used for its public symbols. I personally favor the latter.
> > >
> > > While it is currently meant to supersede existing filter types in order
> > > for all PMDs to expose a common filtering/classification interface, it
> > > may eventually evolve to cover the following ideas as well:
> > >
> > > - Rx/Tx offloads configuration through automatic offloads for specific
> > >   packets, e.g. performing checksum on TCP packets could be expressed
> > >   with an egress rule with a TCP pattern and a kind of checksum action.
> > >
> > > - RSS configuration (already defined actually). Could be global or per
> > >   rule depending on hardware capabilities.
> > >
> > > - Switching configuration for devices with many physical ports; rules
> > >   doing both ingress and egress could even be used to completely bypass
> > >   software if supported by hardware.

Hi Adrien,

apologies for not replying for some time due to my winter vacation.

> Hi Simon,
>
> > Hi Adrien,
> >
> > thanks for this valuable work.
> >
> > I would like to ask some high-level questions on the proposal.
> > I apologise in advance if any of these questions are based on a
> > misunderstanding on my part.
> >
> > * I am wondering about provisions for actions to modify packet data or
> >   metadata. I do see support for marking packets. Is the implication of
> >   this that the main focus is to provide a mechanism for classification,
> >   with the assumption that any actions - other than drop and variants of
> >   output - would be performed elsewhere?
>
> I'm not sure I understand what you mean by "elsewhere" here. Packet marking
> as currently defined is a purely ingress action, i.e. HW matches some packet
> and returns a user-defined tag in related meta-data that the PMD copies to
> the appropriate mbuf structure field before returning it to the application.

By elsewhere I meant in the application; sorry for being unclear.

> There is provision for egress rules and I wrote down a few ideas describing
> how they could be useful (as above), however they remain to be defined.
>
> > If so, I would observe that this seems somewhat limiting in the case of
> > hardware that can perform a richer set of actions. And it seems
> > particularly limiting on egress, as there doesn't seem to be anywhere
> > else that other actions could be performed after classification is
> > performed by this API.
>
> A single flow rule may contain any number of distinct actions. For egress,
> it means you could wrap matching packets in VLAN and VXLAN at once.
>
> If you wanted to perform the same action twice on matching packets, you'd
> have to provide two rules with defined priorities and use a non-terminating
> action for the first one:
>
> - Rule with priority 0: match UDP -> add VLAN 42, passthrough
> - Rule with priority 1: match UDP -> add VLAN 64, terminating
>
> This is how automatic QinQ would be defined for outgoing UDP packets.

Ok, understood. I have two follow-up questions:

1. Is the "add VLAN" action included at this time? I was not able to find it.
2. Was consideration given to allowing multiple instances of the same action
   in a single rule? I see there would be some advantage to that if
   classification is expensive.

(To make question 1 concrete, I sketch below how I imagine those two
prioritized rules would look in code.)

> > * I am curious to know what considerations have been given to supporting
> >   tunnelling (encapsulation and decapsulation of e.g. VXLAN), tagging
> >   (pushing and popping e.g. VLANs), and labels (pushing or popping
> >   e.g. MPLS).
> >
> >   Such features would seem useful for applying this work in a variety of
> >   situations, including overlay networks and VNFs.
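Here is a minimal sketch of how I imagine the two prioritized QinQ rules
above would be expressed through the rte_flow C API. The "add VLAN" action
is the part I could not find, so RTE_FLOW_ACTION_TYPE_ADD_VLAN and its
configuration structure are invented here purely for illustration; the
attributes, pattern items, PASSTHRU action and rte_flow_create() call are
taken from the proposed API as I read it:

#include <rte_flow.h>

/* Hypothetical "add VLAN" egress action: neither this action type nor
 * this configuration structure exist in the patch set (hence question 1
 * above); they are invented purely to illustrate the two-rule scheme. */
struct rte_flow_action_add_vlan {
    uint16_t vid; /* VLAN identifier to push. */
};

static int
add_qinq_rules(uint8_t port_id)
{
    struct rte_flow_error error;

    /* Match any outgoing UDP packet. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_add_vlan inner = { .vid = 42 };
    struct rte_flow_action_add_vlan outer = { .vid = 64 };

    /* Priority 0: push VLAN 42, then PASSTHRU so lower-priority rules
     * still see the packet (the non-terminating part). */
    struct rte_flow_action first[] = {
        { .type = RTE_FLOW_ACTION_TYPE_ADD_VLAN, .conf = &inner },
        { .type = RTE_FLOW_ACTION_TYPE_PASSTHRU },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Priority 1: push VLAN 64; no PASSTHRU, so this rule terminates. */
    struct rte_flow_action second[] = {
        { .type = RTE_FLOW_ACTION_TYPE_ADD_VLAN, .conf = &outer },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_attr prio0 = { .egress = 1, .priority = 0 };
    struct rte_flow_attr prio1 = { .egress = 1, .priority = 1 };

    if (rte_flow_create(port_id, &prio0, pattern, first, &error) == NULL ||
        rte_flow_create(port_id, &prio1, pattern, second, &error) == NULL)
        return -1;
    return 0;
}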
> This is also what I had in mind and we'd only have to define specific
> ingress/egress actions for these. Currently rte_flow only implements a
> basic set of existing features from the legacy filtering framework, but
> is meant to be extended.

Thanks. I think that answers most of my questions: what I see as missing in
terms of actions can be added.

> > * I am wondering if any thought has gone into supporting matching on the
> >   n-th instance of a field that may appear more than once: e.g. VLAN tag.
>
> Sure, please see the latest documentation [1] and testpmd examples [2].
> Pattern items being stacked in the same order as protocol layers, matching
> specific QinQ traffic and redirecting it to some queue could be expressed
> with something like:
>
> testpmd> flow create 0 ingress pattern eth / vlan vid is 64 / vlan vid is 42 / end
>    actions queue 6 / end
>
> Such a rule is translated as-is to rte_flow pattern items and action
> structures.

Thanks, I will look over that. (I sketch what I understand to be the
equivalent C structures at the end of this message.)

> > With the above questions in mind I am curious to know what use-cases
> > the proposal is targeted at.
>
> Well, it should be easier to answer if you have a specific use-case in mind
> you would like to support but that cannot be expressed with the API as
> defined in [1], in which case please share it with the community.

A use-case would be implementing OvS DPIF flow offload using this API.

> [1] http://dpdk.org/ml/archives/dev/2016-December/052954.html
> [2] http://dpdk.org/ml/archives/dev/2016-December/052975.html
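PS: to check my understanding of the testpmd example above, here is a
minimal sketch of the rte_flow structures I expect that command line to
translate to. I am assuming the item layout from the patch set, where VLAN
matching goes through the TCI field with the VID in its low 12 bits (hence
the 0x0fff masks), and that the TCI is given in network byte order:

#include <rte_flow.h>
#include <rte_byteorder.h>

static struct rte_flow *
create_qinq_queue_rule(uint8_t port_id)
{
    struct rte_flow_error error;
    struct rte_flow_attr attr = { .ingress = 1 };

    /* vid is 64 (outer) / vid is 42 (inner); TCI low 12 bits = VID. */
    struct rte_flow_item_vlan outer_vlan = { .tci = rte_cpu_to_be_16(64) };
    struct rte_flow_item_vlan inner_vlan = { .tci = rte_cpu_to_be_16(42) };
    struct rte_flow_item_vlan vid_mask = { .tci = rte_cpu_to_be_16(0x0fff) };

    /* Items are stacked in the same order as protocol layers, so the
     * outermost VLAN comes first; this ordering is what makes matching
     * the n-th instance of a repeated field expressible. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_VLAN,
          .spec = &outer_vlan, .mask = &vid_mask },
        { .type = RTE_FLOW_ITEM_TYPE_VLAN,
          .spec = &inner_vlan, .mask = &vid_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* actions queue 6 */
    struct rte_flow_action_queue queue = { .index = 6 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}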