From: Jerin Jacob
Date: Tue, 9 Jun 2020 21:31:25 +0530
To: Andrey Vesnovaty
Cc: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ori Kam, dpdk-dev, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Rosen Xu, Beilei Xing, jia.guo@intel.com, Rasesh Mody, Shahed Shaikh, Nithin Dabilpuram, Kiran Kumar K, Qiming Yang, Qi Zhang, "Wiles, Keith", Hemant Agrawal, Sachin Saxena, wei.zhao1@intel.com, John Daley, Hyong Youb Kim, Chas Williams, Matan Azrad, Shahaf Shuler, Slava Ovsiienko, Rahul Lakkireddy, Gaetan Rivet, Liron Himi, Jingjing Wu, "Wei Hu (Xavier)", "Min Hu (Connor)", Yisen Zhuang, Ajit Khaparde, Somnath Kotur, Jasvinder Singh, Cristian Dumitrescu
Subject: Re: [dpdk-dev] [RFC] add flow action context API

On Thu, Jun 4, 2020 at 9:27 PM Andrey Vesnovaty wrote:
>
> On Thu, Jun 4, 2020 at 3:37 PM Jerin Jacob wrote:
>>
>> I would suggest adding the rte_flow driver implementers if there is a
>> change in rte_flow_ops in the RFC, so that your patch will get enough
>> review. I added the maintainers of the rte_flow PMD[1] implementers in Cc.
>>
>> >> Would the following additional API suffice for the motivation?
>> >>
>> >> rte_flow_modify_action(struct rte_flow *flow, const struct
>> >> rte_flow_action actions[])
>> >
>> > This API limits the scope to a single flow, which isn't the goal of
>> > the proposed change.
>>
>> Yes. But we need to find the balance between HW features (driver
>> interface) and the public API.
>> Does Mellanox HW have support for a native shared HW ACTION context?
>
> Yes, I'm working on a shared context for the RSS action; patches will
> be available in a couple of weeks.
> Other candidates for this kind of API extension are counters/meters.

OK.

>> If so, it makes sense to have such a fat public API as you suggested.
>> If not, it is a matter of the application iterating over the needed
>> flows to modify actions through rte_flow_modify_action().
>>
>> Assume Mellanox HW has a native HW ACTION context and the majority of
>> the other HW[1] does not. Then, IMO, we should have common code to
>> handle the complex state machine implementation of action_ctx_create,
>> action_ctx_destroy, rte_flow_action_ctx_modify, and action_ctx_query
>> in case the PMD does not support it. (If the PMD/HW supports it, then
>> it can use the native implementation.)
>
> Does it mean that all PMDs will support action-ctx
> create/destroy/update for all types of actions?

That can be decided based on the return value of the following _driver_
op interface:

flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])

>> Reason:
>> 1) I think all the HW will have the option to update the ACTION for a
>> given flow. octeontx2 has it. If not, let's discuss what the typical
>> HW abstraction for an ACTION-only update would be.
>
>> 2) This case can be implemented if the PMD just has
>> flow_modify_action() support. Handling multiple flows is then a matter
>> of iterating over all registered flows.
>
> The general case won't be just an iteration over all flows but:
> 1. destroy the flow
> 2. create the "modified" action
> 3. create the flow with the action from (2)
> Is this what you mean by "common code" to handle the action-ctx
> create/destroy/update implementation?

Yes. Though I'm not sure why the flow needs to be destroyed if only the
action is getting updated.
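To make the two alternatives concrete, a hedged sketch follows.
rte_flow_destroy()/rte_flow_create() are the existing DPDK calls;
rte_flow_modify_action() is only the hypothetical prototype discussed
above, and the helper names are illustrative:

#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical prototype under discussion (not a merged DPDK API):
 * update the action list of an already-created flow in place. */
int rte_flow_modify_action(struct rte_flow *flow,
			   const struct rte_flow_action actions[]);

/* Path A: in-place update, iterating over every flow that shares the
 * action -- no destroy/create needed. */
static int
update_flows_in_place(struct rte_flow *flows[], unsigned int nb_flows,
		      const struct rte_flow_action new_actions[])
{
	unsigned int i;

	for (i = 0; i < nb_flows; i++)
		if (rte_flow_modify_action(flows[i], new_actions) != 0)
			return -1;
	return 0;
}

/* Path B: the three-step general case above, using only the existing
 * rte_flow API -- destroy the flow, then re-create it with the
 * "modified" action list (attr/pattern must be kept around). */
static struct rte_flow *
recreate_flow(uint16_t port_id, struct rte_flow *flow,
	      const struct rte_flow_attr *attr,
	      const struct rte_flow_item pattern[],
	      const struct rte_flow_action new_actions[],
	      struct rte_flow_error *error)
{
	if (rte_flow_destroy(port_id, flow, error) != 0)	/* step 1 */
		return NULL;
	return rte_flow_create(port_id, attr, pattern,		/* steps 2+3 */
			       new_actions, error);
}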
(2) and (3) at the driver-op level can be implemented over just

flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])

(a sketch of such a driver op follows the PMD list below).

>
>> 3) Avoid code duplication in all of the PMDs below[1].
>>
>> [1]
>> drivers/net/hinic/hinic_pmd_flow.c:const struct rte_flow_ops hinic_flow_ops = {
>> drivers/net/ipn3ke/ipn3ke_flow.h:extern const struct rte_flow_ops ipn3ke_flow_ops;
>> drivers/net/i40e/i40e_flow.c:const struct rte_flow_ops i40e_flow_ops = {
>> drivers/net/qede/qede_filter.c:const struct rte_flow_ops qede_flow_ops = {
>> drivers/net/octeontx2/otx2_flow.h:extern const struct rte_flow_ops otx2_flow_ops;
>> drivers/net/ice/ice_generic_flow.h:extern const struct rte_flow_ops ice_flow_ops;
>> drivers/net/tap/tap_flow.c:static const struct rte_flow_ops tap_flow_ops = {
>> drivers/net/dpaa2/dpaa2_ethdev.h:extern const struct rte_flow_ops dpaa2_flow_ops;
>> drivers/net/e1000/e1000_ethdev.h:extern const struct rte_flow_ops igb_flow_ops;
>> drivers/net/enic/enic_flow.c:const struct rte_flow_ops enic_flow_ops = {
>> drivers/net/bonding/rte_eth_bond_flow.c:const struct rte_flow_ops bond_flow_ops = {
>> drivers/net/mlx5/mlx5_flow.c:static const struct rte_flow_ops mlx5_flow_ops = {
>> drivers/net/igc/igc_flow.h:extern const struct rte_flow_ops igc_flow_ops;
>> drivers/net/cxgbe/cxgbe_flow.c:static const struct rte_flow_ops cxgbe_flow_ops = {
>> drivers/net/failsafe/failsafe_private.h:extern const struct rte_flow_ops fs_flow_ops;
>> drivers/net/mvpp2/mrvl_flow.c:const struct rte_flow_ops mrvl_flow_ops = {
>> drivers/net/iavf/iavf_generic_flow.c:const struct rte_flow_ops iavf_flow_ops = {
>> drivers/net/hns3/hns3_flow.c:static const struct rte_flow_ops hns3_flow_ops = {
>> drivers/net/bnxt/bnxt_flow.c:const struct rte_flow_ops bnxt_flow_ops = {
>> drivers/net/mlx4/mlx4_flow.c:static const struct rte_flow_ops mlx4_flow_ops = {
>> drivers/net/sfc/sfc_flow.c:const struct rte_flow_ops sfc_flow_ops = {
>> drivers/net/softnic/rte_eth_softnic_flow.c:const struct rte_flow_ops pmd_flow_ops = {
>> drivers/net/ixgbe/ixgbe_ethdev.h:extern const struct rte_flow_ops ixgbe_flow_ops;
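Picking up the driver-op point from above the list: a hedged sketch of
how that single callback might surface in struct rte_flow_ops. The
member name and signature are assumptions mirroring the prototype
discussed earlier; the real struct lives in rte_flow_driver.h and its
existing members are elided here:

/* Forward declarations so the sketch stands alone. */
struct rte_eth_dev;
struct rte_flow;
struct rte_flow_action;
struct rte_flow_error;

struct rte_flow_ops {
	/* ... existing validate/create/destroy/flush/query callbacks ... */

	/* Hypothetical addition: update the actions of an existing flow.
	 * PMDs with native support implement it directly; common code can
	 * build action-ctx create/destroy/modify/query on top of it,
	 * leaving it NULL to signal "not supported". */
	int (*modify_action)(struct rte_eth_dev *dev,
			     struct rte_flow *flow,
			     const struct rte_flow_action actions[],
			     struct rte_flow_error *error);
};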