From: Stephen Hemminger <stephen@networkplumber.org>
To: "Lukáš Šišmiš" <sismis@cesnet.cz>
Cc: dev@dpdk.org
Subject: Re: RFCv1: DPDK RTE Flow Rule Parser
Date: Fri, 7 Nov 2025 08:07:27 -0800
Message-ID: <20251107080727.13e201a1@phoenix>
In-Reply-To: <be773672-77bb-4c2d-ba12-43b75ff29507@cesnet.cz>

On Fri, 7 Nov 2025 15:16:28 +0100
Lukáš Šišmiš <sismis@cesnet.cz> wrote:

> Hello all,
> 
> ## Motivation
> 
> Recent discussions on DPDK Slack raised the idea of extracting the 
> rte_flow rule parser currently embedded in dpdk-testpmd into a 
> standalone, reusable library [1].
> 
> The main motivation is that external applications, such as Suricata 
> IDS [2], often need to express hardware filtering rules in a consistent, 
> human-readable format.
> 
> When integrating rte_flow into Suricata [3], we encountered the lack of 
> a unified way to define such rules. The immediate need was to let users 
> specify input filters (drop/allow) determining which traffic should be 
> inspected.
> 
> Suricata’s existing capture modes (e.g. AF-PACKET) rely on BPF filters 
> [4]. Maintaining consistency across Suricata capture backends would be 
> ideal, but BPF and rte_flow differ significantly in expressiveness.
> 
> The other options are either the dpdk-testpmd syntax or a custom rule 
> syntax. To avoid reinventing the wheel, I am leaning towards the testpmd 
> syntax for its ready-to-use, generic expressiveness, especially for 
> network traffic patterns. For reference, I mean the rte_flow rule syntax 
> that you can define through the testpmd CLI, e.g., "flow create 0 
> ingress pattern eth / vlan vid is 0xabc / ipv4 src is 192.168.0.1 / 
> tcp src is 53 / end actions drop / end".
> 
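> As a concrete illustration (a hand-written sketch of mine, not actual 
> parser output), a rule like the one above corresponds roughly to the 
> following rte_flow structures; the VLAN and TCP port matches are 
> omitted here for brevity:
> 
> #include <rte_flow.h>
> #include <rte_ip.h>
> 
> /* attributes: ingress direction only */
> static const struct rte_flow_attr attr = { .ingress = 1 };
> 
> /* match IPv4 source address 192.168.0.1 exactly */
> static const struct rte_flow_item_ipv4 ipv4_spec = {
>         .hdr.src_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
> };
> static const struct rte_flow_item_ipv4 ipv4_mask = {
>         .hdr.src_addr = RTE_BE32(0xffffffff),
> };
> 
> /* pattern: eth / ipv4 src is 192.168.0.1 / tcp / end */
> static const struct rte_flow_item pattern[] = {
>         { .type = RTE_FLOW_ITEM_TYPE_ETH },
>         { .type = RTE_FLOW_ITEM_TYPE_IPV4,
>           .spec = &ipv4_spec, .mask = &ipv4_mask },
>         { .type = RTE_FLOW_ITEM_TYPE_TCP },
>         { .type = RTE_FLOW_ITEM_TYPE_END },
> };
> 
> /* actions: drop / end */
> static const struct rte_flow_action actions[] = {
>         { .type = RTE_FLOW_ACTION_TYPE_DROP },
>         { .type = RTE_FLOW_ACTION_TYPE_END },
> };
> 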
> On Slack, Thomas Monjalon concluded that a new parser library is 
> generally welcome, but we need to state that it is just one way to 
> create rte_flow C structures. (Fine by me)
> 
> ## Library proposal
> 
> The existing function flow_parse() in dpdk-testpmd already performs most 
> of the needed work:
> 
> int
> flow_parse(const char *src, void *result, unsigned int size,
>        struct rte_flow_attr **attr,
>        struct rte_flow_item **pattern, struct rte_flow_action **actions);
> 
> It parses a rule expressed in testpmd syntax and initializes rte_flow 
> attributes, items, and actions.
> External applications that use these structures directly can skip 
> redundant setup logic and rely on standard DPDK APIs (validate, create, 
> destroy).
> 
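> To make the intended use concrete, here is a rough sketch of how an 
> application could drive it, assuming the function were exported with the 
> signature above (buffer sizing and error reporting are simplified):
> 
> #include <rte_flow.h>
> 
> /* flow_parse() is today internal to testpmd; assumed exported here */
> static struct rte_flow *
> install_rule(uint16_t port_id, const char *rule)
> {
>         static uint8_t buf[4096];  /* scratch space for the parsed data */
>         struct rte_flow_attr *attr;
>         struct rte_flow_item *pattern;
>         struct rte_flow_action *actions;
>         struct rte_flow_error error;
> 
>         if (flow_parse(rule, buf, sizeof(buf), &attr, &pattern, &actions) != 0)
>                 return NULL;
>         if (rte_flow_validate(port_id, attr, pattern, actions, &error) != 0)
>                 return NULL;
>         return rte_flow_create(port_id, attr, pattern, actions, &error);
> }
> 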
> For a public API, the void *result and unsigned int size parameters 
> appear unnecessary and could be removed. The simplified interface would 
> only expose the meaningful outputs (attr, pattern, actions).
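> 
> Something along these lines (the name and exact prototype below are only 
> an illustration, not a settled proposal):
> 
> int
> rte_flow_rule_parse(const char *rule,
>                     struct rte_flow_attr **attr,
>                     struct rte_flow_item **pattern,
>                     struct rte_flow_action **actions);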
> 
> ## Problem statement
> 
> The main question is how to provide this parser without fragmenting 
> existing functionality.
> 
> I would like to extract the existing code from dpdk-testpmd so that one 
> parser is available and used by both testpmd and external apps (via the 
> library itself).
> I quickly ran into the complexity of the testpmd code and how entangled 
> the C structures are throughout testpmd's source code.
> While the parser extraction should be possible, I wanted to check with 
> the community whether that is the preferred approach.
> Since the extraction moves a lot of code from one place to another, there 
> is a very good chance that it would break all forked custom testpmds.
> 
> The other alternative is to "start simple" with a separate 
> implementation, perhaps focusing only on a subset of testpmd's parser 
> capabilities. But this would very likely lead to two codebases being 
> maintained independently.
> 
> Before taking either route, I’d like to understand the community’s 
> preference:
> - Do you see this as a valuable contribution for consumer applications?
> - Can you think of an alternative way to provide the unified 
> human-readable format conversion, both at the code level and the 
> interface level?
> - Is testpmd code extraction the right long-term solution, even if 
> disruptive? Should private DPDK forks be taken into consideration? 
> Or should I start with a separate lightweight parser and revisit 
> integration later?
> 
> Any other feedback is welcome.
> 
> 
> Thank you.
> 
> All the best,
> Lukas
> 
> 
> [1] https://dpdkproject.slack.com/archives/CB2UPBU48/p1759765888891329
> [2] https://github.com/OISF/suricata
> [3] https://github.com/OISF/suricata/pull/13950
> [4] https://docs.suricata.io/en/latest/performance/ignoring-traffic.html
> 
> 

Seems like a good place to see what any of the AI tools can do.
Would also be good to use standard parsing tools (lex + yacc) rather than
doing all the parsing with open-coded C string handling.
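
For illustration only, a toy yacc/bison grammar covering a tiny subset of
the pattern/actions syntax could look like this (a sketch with a
hand-rolled lexer; file and token names are arbitrary, not a proposal):

%{
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *input;   /* rule text being parsed */
static int yylex(void);
static void yyerror(const char *msg) { fprintf(stderr, "%s\n", msg); }
%}

%token PATTERN ACTIONS END_KW SLASH ETH IPV4 TCP DROP

%%

rule: PATTERN items SLASH END_KW ACTIONS acts SLASH END_KW
        { printf("rule accepted\n"); }
    ;

items: item | items SLASH item ;
item: ETH   { printf("item: eth\n"); }
    | IPV4  { printf("item: ipv4\n"); }
    | TCP   { printf("item: tcp\n"); }
    ;

acts: act | acts SLASH act ;
act: DROP   { printf("action: drop\n"); }
   ;

%%

/* minimal hand-written lexer over the input string */
static int yylex(void)
{
    while (isspace((unsigned char)*input))
        input++;
    if (*input == '\0')
        return 0;                            /* end of input */
    if (*input == '/') {
        input++;
        return SLASH;
    }
    const char *start = input;
    while (*input && !isspace((unsigned char)*input) && *input != '/')
        input++;
    size_t len = (size_t)(input - start);
    if (len == 7 && !strncmp(start, "pattern", len)) return PATTERN;
    if (len == 7 && !strncmp(start, "actions", len)) return ACTIONS;
    if (len == 3 && !strncmp(start, "end", len))     return END_KW;
    if (len == 3 && !strncmp(start, "eth", len))     return ETH;
    if (len == 4 && !strncmp(start, "ipv4", len))    return IPV4;
    if (len == 3 && !strncmp(start, "tcp", len))     return TCP;
    if (len == 4 && !strncmp(start, "drop", len))    return DROP;
    return -1;                               /* unrecognized word */
}

int main(void)
{
    input = "pattern eth / ipv4 / tcp / end actions drop / end";
    return yyparse();                        /* 0 on success */
}

Build with something like "bison grammar.y && cc grammar.tab.c -o demo".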

Ignore private DPDK forks; we can't test them. If you build it, they will come to the new code.
