From: Andrey Vesnovaty
Date: Thu, 4 Jun 2020 18:57:38 +0300
Subject: Re: [dpdk-dev] [RFC] add flow action context API
To: Jerin Jacob
Cc: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ori Kam, dpdk-dev,
 Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Rosen Xu, Beilei Xing,
 jia.guo@intel.com, Rasesh Mody, Shahed Shaikh, Nithin Dabilpuram,
 Kiran Kumar K, Qiming Yang, Qi Zhang, "Wiles, Keith", Hemant Agrawal,
 Sachin Saxena, wei.zhao1@intel.com, John Daley, Hyong Youb Kim,
 Chas Williams, Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
 Rahul Lakkireddy, Gaetan Rivet, Liron Himi, Jingjing Wu,
 "Wei Hu (Xavier)", "Min Hu (Connor)", Yisen Zhuang, Ajit Khaparde,
 Somnath Kotur, Jasvinder Singh, Cristian Dumitrescu

On Thu, Jun 4, 2020 at 3:37 PM Jerin Jacob wrote:

> I would suggest adding rte_flow driver implementers if there is a
> change in rte_flow_ops in RFC so that your patch will get enough.
> I added the maintainers of rte_flow PMD[1] implementers in Cc.
> >> Would the following additional API suffice for the motivation?
> >>
> >> rte_flow_modify_action(struct rte_flow *flow, const struct
> >> rte_flow_action actions[])
> >
> > This API limits the scope to a single flow, which isn't the goal of
> > the proposed change.

> Yes. But we need to find the balance between HW features (driver
> interface) and public API.
> Does Mellanox HW have support for a native shared HW ACTION context?

Yes, I'm working on a shared context for the RSS action; patches will
be available in a couple of weeks. Other candidates for this kind of
API extension are counters/meters.

> If so, it makes sense to have such a fat public API as you suggested.
> If not, it is a matter for the application to iterate over the needed
> flows and modify the action through rte_flow_modify_action().
>
> Assume Mellanox HW has a native HW ACTION context and the majority of
> the other HW[1] does not. Then IMO we should have common code to
> handle this complex state machine, i.e. the implementation of
> action_ctx_create, action_ctx_destroy, rte_flow_action_ctx_modify and
> action_ctx_query, in case the PMD does not support it. (If the PMD/HW
> supports it, then it can use the native implementation.)

Does it mean that all PMDs will support action-ctx
create/destroy/update for all types of actions?

> Reason:
> 1) I think all the HW will have the option to update the ACTION for a
> given flow. octeontx2 has it. If not, let's discuss what a typical HW
> abstraction for an ACTION-only update would be.
>
> 2) This case can be implemented if the PMD just has
> flow_modify_action() support. Multiple flows will be a matter of
> iterating over all registered flows.

The general case won't be just an iteration over all flows but:
1. destroy the flow
2. create the "modified" action
3. create the flow with the action from (2)

Is this what you mean by "common code" to handle the action-ctx
create/destroy/update implementation?
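If so, the fallback would be roughly the sketch below. To be clear,
struct action_ctx, struct flow_entry and the per-context flow
bookkeeping are hypothetical names I'm using for illustration, not
anything from the RFC; only rte_flow_create()/rte_flow_destroy() are
the existing public API:

#include <sys/queue.h>
#include <rte_flow.h>
#include <rte_errno.h>

/*
 * Hypothetical bookkeeping for the generic layer: one entry per flow
 * created through a shared action context, keeping the original rule
 * so the flow can be re-created after the action changes.
 */
struct flow_entry {
	LIST_ENTRY(flow_entry) next;
	struct rte_flow *flow;
	struct rte_flow_attr attr;
	struct rte_flow_item *pattern;
	struct rte_flow_action *actions;	/* END-terminated */
};

struct action_ctx {
	LIST_HEAD(, flow_entry) flows;
};

static int
generic_action_ctx_modify(uint16_t port_id, struct action_ctx *ctx,
			  const struct rte_flow_action *new_action,
			  struct rte_flow_error *error)
{
	struct flow_entry *fe;
	struct rte_flow_action *act;
	int ret;

	LIST_FOREACH(fe, &ctx->flows, next) {
		/* 1. destroy the flow that uses the old action */
		ret = rte_flow_destroy(port_id, fe->flow, error);
		if (ret != 0)
			return ret;
		/* 2. swap in the "modified" action (matched by type
		 *    here, purely for illustration) */
		for (act = fe->actions;
		     act->type != RTE_FLOW_ACTION_TYPE_END; act++)
			if (act->type == new_action->type)
				act->conf = new_action->conf;
		/* 3. re-create the flow with the action from (2) */
		fe->flow = rte_flow_create(port_id, &fe->attr,
					   fe->pattern, fe->actions,
					   error);
		if (fe->flow == NULL)
			return -rte_errno;
	}
	return 0;
}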
> 3) Avoid code duplication on all of the below PMDs[1]
>
> [1]
> drivers/net/hinic/hinic_pmd_flow.c:const struct rte_flow_ops hinic_flow_ops = {
> drivers/net/ipn3ke/ipn3ke_flow.h:extern const struct rte_flow_ops ipn3ke_flow_ops;
> drivers/net/i40e/i40e_flow.c:const struct rte_flow_ops i40e_flow_ops = {
> drivers/net/qede/qede_filter.c:const struct rte_flow_ops qede_flow_ops = {
> drivers/net/octeontx2/otx2_flow.h:extern const struct rte_flow_ops otx2_flow_ops;
> drivers/net/ice/ice_generic_flow.h:extern const struct rte_flow_ops ice_flow_ops;
> drivers/net/tap/tap_flow.c:static const struct rte_flow_ops tap_flow_ops = {
> drivers/net/dpaa2/dpaa2_ethdev.h:extern const struct rte_flow_ops dpaa2_flow_ops;
> drivers/net/e1000/e1000_ethdev.h:extern const struct rte_flow_ops igb_flow_ops;
> drivers/net/enic/enic_flow.c:const struct rte_flow_ops enic_flow_ops = {
> drivers/net/bonding/rte_eth_bond_flow.c:const struct rte_flow_ops bond_flow_ops = {
> drivers/net/mlx5/mlx5_flow.c:static const struct rte_flow_ops mlx5_flow_ops = {
> drivers/net/igc/igc_flow.h:extern const struct rte_flow_ops igc_flow_ops;
> drivers/net/cxgbe/cxgbe_flow.c:static const struct rte_flow_ops cxgbe_flow_ops = {
> drivers/net/failsafe/failsafe_private.h:extern const struct rte_flow_ops fs_flow_ops;
> drivers/net/mvpp2/mrvl_flow.c:const struct rte_flow_ops mrvl_flow_ops = {
> drivers/net/iavf/iavf_generic_flow.c:const struct rte_flow_ops iavf_flow_ops = {
> drivers/net/hns3/hns3_flow.c:static const struct rte_flow_ops hns3_flow_ops = {
> drivers/net/bnxt/bnxt_flow.c:const struct rte_flow_ops bnxt_flow_ops = {
> drivers/net/mlx4/mlx4_flow.c:static const struct rte_flow_ops mlx4_flow_ops = {
> drivers/net/sfc/sfc_flow.c:const struct rte_flow_ops sfc_flow_ops = {
> drivers/net/softnic/rte_eth_softnic_flow.c:const struct rte_flow_ops pmd_flow_ops = {
> drivers/net/ixgbe/ixgbe_ethdev.h:extern const struct rte_flow_ops ixgbe_flow_ops;
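And on 3), the common rte_flow layer could dispatch to a native PMD
callback when one is provided and otherwise fall back to the
destroy/re-create path sketched above. Again a rough sketch only: the
action_ctx_modify member of rte_flow_ops and this entry point's
signature are hypothetical, while rte_flow_ops_get() and
rte_eth_devices[] are the existing driver-side API:

#include <rte_flow_driver.h>
#include <rte_errno.h>

/* Reuses struct action_ctx and generic_action_ctx_modify() from the
 * sketch above. PMDs with native HW support set the (hypothetical)
 * action_ctx_modify op; all others inherit the generic fallback, so
 * none of the drivers listed in [1] has to duplicate it. */
int
rte_flow_action_ctx_modify(uint16_t port_id, struct action_ctx *ctx,
			   const struct rte_flow_action *action,
			   struct rte_flow_error *error)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);

	if (ops == NULL)
		return -rte_errno;
	if (ops->action_ctx_modify != NULL)	/* native HW support */
		return ops->action_ctx_modify(dev, ctx, action, error);
	/* common code shared by all PMDs lacking native support */
	return generic_action_ctx_modify(port_id, ctx, action, error);
}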