From: William Tu
Date: Sun, 12 Jul 2020 09:34:22 -0700
To: Oz Shlomo
Cc: dev@dpdk.org, Thomas Monjalon, Ori Kam, Eli Britstein,
 Sriharsha Basavapatna, Hemal Shah
Subject: Re: [dpdk-dev] [RFC] - Offloading tunnel ports

Hi Oz,

I started to learn about this and have a couple of questions below.
Thank you in advance.

On Tue, Jun 9, 2020 at 8:07 AM Oz Shlomo wrote:
>
> Rte_flow API provides the building blocks for vendor agnostic flow
> classification offloads. The rte_flow match and action primitives are
> fine grained, thus giving DPDK applications the flexibility to offload
> network stacks and complex pipelines.
>
> Applications wishing to offload complex data structures (e.g. tunnel
> virtual ports) are required to use the rte_flow primitives, such as
> group, meta, mark, tag, and others, to model their high level objects.
>
> The hardware model design for high level software objects is not
> trivial. Furthermore, an optimal design is often vendor specific.
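
For my own understanding, here is a minimal sketch of how I picture an
application modeling a vxlan virtual port today with the existing
primitives; the vport id (0x1234) and the group numbers are arbitrary
choices of mine:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Classification flow in group 0: match udp dst port 4789 (vxlan),
 * remember the virtual port in a mark, then jump to the group holding
 * the steering flows -- i.e. tnl_pop(vxlan_vport) modeled with
 * existing primitives. */
static struct rte_flow *
offload_vxlan_classification(uint16_t port_id, struct rte_flow_error *err)
{
	const struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
	const struct rte_flow_item_udp udp_spec = {
		.hdr = { .dst_port = RTE_BE16(4789) },
	};
	const struct rte_flow_item_udp udp_mask = {
		.hdr = { .dst_port = RTE_BE16(0xffff) },
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_mark mark = { .id = 0x1234 };
	const struct rte_flow_action_jump jump = { .group = 1 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

The steering flows in group 1 would then match the mark (item
RTE_FLOW_ITEM_TYPE_MARK) plus the outer/inner headers.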
>
> The goal of this RFC is to provide applications with a hardware offload
> model for common high level software objects which is optimal with
> regard to the underlying hardware.
>
> Tunnel ports are the first such objects.
>
> Tunnel ports
> ------------
> Ingress processing of tunneled traffic requires classification of the
> tunnel type followed by a decap action.
>
> In software, once a packet is decapsulated, the in_port field is changed
> to a virtual port representing the tunnel type. The outer header fields
> are stored as packet metadata members and may be matched by subsequent
> flows.
>
> Openvswitch, for example, uses two flows:
> 1. A classification flow, setting the virtual port that represents the
>    tunnel type. For example: match on udp port 4789,
>    actions=tnl_pop(vxlan_vport).
> 2. A steering flow according to outer and inner header matches: match on
>    in_port=vxlan_vport and outer/inner headers,
>    actions=forward to port X.
> The benefits of multi-flow tables are described in [1].
>
> Offloading tunnel ports
> -----------------------
> Tunnel ports introduce a new stateless field that can be matched on.
> Currently the rte_flow library provides an API to encap, decap, and
> match on tunnel headers. However, there is no rte_flow primitive to set
> and match tunnel virtual ports.
>
> There are several possible hardware models for offloading virtual
> tunnel port flows, including, but not limited to, the following:
> 1. Setting the virtual port on a hw register using the
>    rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
> 2. Mapping a virtual port to an rte_flow group.
> 3. Avoiding the need to match on transient objects by merging
>    multi-table flows into a single rte_flow rule.
>
> Every approach has its pros and cons.
> The preferred approach should take into account the entire system
> architecture and is very often vendor specific.

Are these three solutions mutually exclusive? And IIUC, based on the
description below, you're proposing solution 1, right? And the patch on
OVS is using solution 2?
https://patchwork.ozlabs.org/project/openvswitch/cover/20200120150830.16262-1-elibr@mellanox.com/

> The proposed rte_flow_tunnel_port_set helper function (drafted below)
> is designed to provide a common, vendor agnostic API for setting the
> virtual port value. The helper API enables PMD implementations to
> return a vendor specific combination of rte_flow actions realizing the
> vendor's hardware model for setting a tunnel port. Applications may
> append the list of actions returned from the helper function when
> creating an rte_flow rule in hardware.
>
> Similarly, the rte_flow_tunnel_port_match helper (drafted below) allows
> multiple hardware implementations to return a list of rte_flow items.

And if we're using solution 1, "setting the virtual port on a hw register
using the rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta
objects", then for the classification flow, does that mean the HW no
longer needs to translate tnl_pop to mark + jump, but can directly
execute the tnl_pop(vxlan_vport) action because the outer header is
saved using rte_flow_set_meta?
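
Also, to make sure I read the helper contract right, here is how I
picture an application consuming rte_flow_tunnel_port_set for the
classification flow. The draft itself is not quoted here, so the
prototype below, including the (port_id, vport_id) parameters, is only
my guess, for illustration:

#include <stdint.h>
#include <rte_flow.h>

/* ASSUMED prototype: the RFC says the helper returns a vendor specific
 * list of rte_flow actions realizing "set tunnel port"; the actual
 * drafted signature may differ. */
int rte_flow_tunnel_port_set(uint16_t port_id, uint32_t vport_id,
			     struct rte_flow_action **pmd_actions,
			     int *num_pmd_actions,
			     struct rte_flow_error *error);

/* The application appends its own actions (here a jump to the steering
 * group) after the PMD supplied ones, then creates the rule as usual. */
static struct rte_flow *
create_classification_flow(uint16_t port_id, uint32_t vxlan_vport,
			   const struct rte_flow_item *pattern,
			   struct rte_flow_error *err)
{
	const struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
	const struct rte_flow_action_jump jump = { .group = 1 };
	struct rte_flow_action *pmd_actions;
	struct rte_flow_action actions[8];
	int n, i;

	if (rte_flow_tunnel_port_set(port_id, vxlan_vport,
				     &pmd_actions, &n, err) != 0)
		return NULL;

	for (i = 0; i < n && i < 6; i++)	/* vendor actions first */
		actions[i] = pmd_actions[i];
	actions[i].type = RTE_FLOW_ACTION_TYPE_JUMP;
	actions[i].conf = &jump;
	actions[i + 1].type = RTE_FLOW_ACTION_TYPE_END;
	actions[i + 1].conf = NULL;

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}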
> Miss handling
> -------------
> Packets going through multiple rte_flow groups are exposed to hw misses
> due to partial packet processing. In such cases, the software should
> continue the packet's processing from the point where the hardware
> missed.
>
> We propose a generic rte_flow_restore structure providing the state
> that was stored in hardware when the packet missed.
>
> Currently, the structure will provide the tunnel state of the packet
> that missed, namely:
> 1. The group id that missed
> 2. The tunnel port that missed
> 3. Tunnel information that was stored in memory (due to a decap action).
> In the future, we may add additional fields as more state may be stored
> in the device memory (e.g. ct_state).
>
> Applications may query the state via a new
> rte_flow_get_restore_info(mbuf) API, thus allowing a vendor specific
> implementation.
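
The mbuf-based query sounds good. To check my reading of the miss path,
below is roughly how I imagine the application side; the struct layout
simply mirrors the three fields above, and the exact signatures, as well
as the two pipeline helpers, are my guesses:

#include <stdint.h>
#include <rte_mbuf.h>

/* ASSUMED shapes: only rte_flow_get_restore_info(mbuf) is named in the
 * RFC; everything else here is hypothetical. */
struct rte_flow_restore {
	uint32_t group_id;	/* 1. the group that missed */
	uint32_t tunnel_port;	/* 2. the tunnel port that missed */
	/* 3. tunnel/outer header info stored on decap would go here */
};

int rte_flow_get_restore_info(struct rte_mbuf *m,
			      struct rte_flow_restore *info);

/* Hypothetical stand-ins for the application's software pipeline. */
void resume_from_group(struct rte_mbuf *m, uint32_t group, uint32_t vport);
void classify_from_scratch(struct rte_mbuf *m);

/* On a hw miss, software continues from where the hardware stopped. */
static void
handle_hw_miss(struct rte_mbuf *m)
{
	struct rte_flow_restore info;

	if (rte_flow_get_restore_info(m, &info) == 0)
		resume_from_group(m, info.group_id, info.tunnel_port);
	else
		classify_from_scratch(m); /* no hw state for this mbuf */
}

Thanks
William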