References: <20200520091801.30163-1-andrey.vesnovaty@gmail.com>
 <CALBAE1Os+GG7UTRYVtiqcaHERPvaM6Ge+HeBF-fj=oM6STdPGg@mail.gmail.com>
 <CAOwx9StPnWCvCeiw+qtJg0oBWsCF2Wor50QgMCsTZuDjiEh70Q@mail.gmail.com>
In-Reply-To: <CAOwx9StPnWCvCeiw+qtJg0oBWsCF2Wor50QgMCsTZuDjiEh70Q@mail.gmail.com>
From: Jerin Jacob <jerinjacobk@gmail.com>
Date: Thu, 4 Jun 2020 18:06:51 +0530
Message-ID: <CALBAE1M0WvahU1O7p9BFS_uc-x+zPeroYpzfgJkyZsy9esU=Yg@mail.gmail.com>
To: Andrey Vesnovaty <andrey.vesnovaty@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>,
 Ferruh Yigit <ferruh.yigit@intel.com>, 
 Andrew Rybchenko <arybchenko@solarflare.com>, Ori Kam <orika@mellanox.com>,
 dpdk-dev <dev@dpdk.org>, Ziyang Xuan <xuanziyang2@huawei.com>,
 Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>, 
 Guoyang Zhou <zhouguoyang@huawei.com>, Rosen Xu <rosen.xu@intel.com>, 
 Beilei Xing <beilei.xing@intel.com>, jia.guo@intel.com,
 Rasesh Mody <rmody@marvell.com>, Shahed Shaikh <shshaikh@marvell.com>,
 Nithin Dabilpuram <ndabilpuram@marvell.com>, 
 Kiran Kumar K <kirankumark@marvell.com>, Qiming Yang <qiming.yang@intel.com>, 
 Qi Zhang <qi.z.zhang@intel.com>, "Wiles, Keith" <keith.wiles@intel.com>, 
 Hemant Agrawal <hemant.agrawal@nxp.com>, Sachin Saxena <sachin.saxena@nxp.com>,
 wei.zhao1@intel.com, 
 John Daley <johndale@cisco.com>, Hyong Youb Kim <hyonkim@cisco.com>,
 Chas Williams <chas3@att.com>, 
 Matan Azrad <matan@mellanox.com>, Shahaf Shuler <shahafs@mellanox.com>, 
 Slava Ovsiienko <viacheslavo@mellanox.com>,
 Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>, 
 Gaetan Rivet <grive@u256.net>, Liron Himi <lironh@marvell.com>,
 Jingjing Wu <jingjing.wu@intel.com>,
 "Wei Hu (Xavier)" <xavier.huwei@huawei.com>,
 "Min Hu (Connor)" <humin29@huawei.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>,
 Ajit Khaparde <ajit.khaparde@broadcom.com>, 
 Somnath Kotur <somnath.kotur@broadcom.com>,
 Jasvinder Singh <jasvinder.singh@intel.com>, 
 Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Subject: Re: [dpdk-dev] [RFC] add flow action context API

I would suggest adding the rte_flow driver implementers whenever an RFC
changes rte_flow_ops, so that your patch gets enough review. I have added
the maintainers of the rte_flow PMD[1] implementers in Cc.


>>
>> Would the following additional API suffice the motivation?
>>
>> rte_flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])
>
>
> This API limits the scope to a single flow, which isn't the goal of the proposed change.

Yes. But we need to find the balance between HW features (driver
interface) and the public API.
Does Mellanox HW support a native shared HW ACTION context?
If so, it makes sense to have such a fat public API as you suggested.
If not, it is up to the application to iterate over the needed flows and
modify the action through rte_flow_modify_action().
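
For illustration, that application-side loop could look like the untested
sketch below (rte_flow_modify_action() is only the prototype proposed in
this thread, not an existing rte_flow API, and the flows[] array stands
for whatever bookkeeping the application already keeps):

#include <rte_flow.h>

/* Assumed prototype from this thread (not an existing DPDK call). */
int rte_flow_modify_action(struct rte_flow *flow,
			   const struct rte_flow_action actions[]);

static int
app_update_flows(struct rte_flow *flows[], unsigned int nb_flows,
		 const struct rte_flow_action actions[])
{
	unsigned int i;
	int ret;

	for (i = 0; i < nb_flows; i++) {
		/* Update the action list of one flow at a time. */
		ret = rte_flow_modify_action(flows[i], actions);
		if (ret != 0)
			return ret; /* stop on the first failure */
	}
	return 0;
}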

Assuming Mellanox HW has a native HW ACTION context and the majority of
the other HW[1] does not, then IMO we should have common code to handle
the complex state machine implementation of action_ctx_create,
action_ctx_destroy, rte_flow_action_ctx_modify and action_ctx_query for
the case where the PMD does not support it. (If the PMD/HW supports it,
then it can use its native implementation.)

Reasons:
1) I think all the HW will have the option to update the ACTION for a
given flow; octeontx2 has it. If not, let's discuss what a typical HW
abstraction for an ACTION-only update would be.
2) This case can be implemented if the PMD just has flow_modify_action()
support; handling multiple flows is then a matter of iterating over all
registered flows, as in the sketch below.
3) Avoid code duplication across all of the PMDs below[1].
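
To make that concrete, here is an untested sketch of such common fallback
code (the action_ctx naming follows the RFC; the generic context struct,
the MAX_FLOWS_PER_CTX bound and the per-flow modify callback are
hypothetical, only to show how the rte_flow layer could emulate a shared
context on top of a per-flow modify operation):

#include <rte_flow.h>

#define MAX_FLOWS_PER_CTX 64 /* arbitrary bound, just for the sketch */

/* Hypothetical per-flow PMD op, mirroring flow_modify_action() above. */
typedef int (*flow_modify_action_t)(struct rte_flow *flow,
				    const struct rte_flow_action actions[]);

/* Generic shared-action context kept by the rte_flow layer. */
struct generic_action_ctx {
	struct rte_flow *flows[MAX_FLOWS_PER_CTX]; /* flows using this ctx */
	unsigned int nb_flows;
};

static int
generic_action_ctx_modify(struct generic_action_ctx *ctx,
			  const struct rte_flow_action actions[],
			  flow_modify_action_t pmd_modify)
{
	unsigned int i;
	int ret;

	/* Emulate a shared-context update: update every attached flow. */
	for (i = 0; i < ctx->nb_flows; i++) {
		ret = pmd_modify(ctx->flows[i], actions);
		if (ret != 0)
			return ret;
	}
	return 0;
}

The same flows[] bookkeeping in the context would back
action_ctx_create/destroy/query as well, so PMDs without native support
would get the whole state machine from common code.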



[1]
drivers/net/hinic/hinic_pmd_flow.c:const struct rte_flow_ops hinic_flow_ops = {
drivers/net/ipn3ke/ipn3ke_flow.h:extern const struct rte_flow_ops ipn3ke_flow_ops;
drivers/net/i40e/i40e_flow.c:const struct rte_flow_ops i40e_flow_ops = {
drivers/net/qede/qede_filter.c:const struct rte_flow_ops qede_flow_ops = {
drivers/net/octeontx2/otx2_flow.h:extern const struct rte_flow_ops otx2_flow_ops;
drivers/net/ice/ice_generic_flow.h:extern const struct rte_flow_ops ice_flow_ops;
drivers/net/tap/tap_flow.c:static const struct rte_flow_ops tap_flow_ops = {
drivers/net/dpaa2/dpaa2_ethdev.h:extern const struct rte_flow_ops dpaa2_flow_ops;
drivers/net/e1000/e1000_ethdev.h:extern const struct rte_flow_ops igb_flow_ops;
drivers/net/enic/enic_flow.c:const struct rte_flow_ops enic_flow_ops = {
drivers/net/bonding/rte_eth_bond_flow.c:const struct rte_flow_ops bond_flow_ops = {
drivers/net/mlx5/mlx5_flow.c:static const struct rte_flow_ops mlx5_flow_ops = {
drivers/net/igc/igc_flow.h:extern const struct rte_flow_ops igc_flow_ops;
drivers/net/cxgbe/cxgbe_flow.c:static const struct rte_flow_ops cxgbe_flow_ops = {
drivers/net/failsafe/failsafe_private.h:extern const struct rte_flow_ops fs_flow_ops;
drivers/net/mvpp2/mrvl_flow.c:const struct rte_flow_ops mrvl_flow_ops = {
drivers/net/iavf/iavf_generic_flow.c:const struct rte_flow_ops iavf_flow_ops = {
drivers/net/hns3/hns3_flow.c:static const struct rte_flow_ops hns3_flow_ops = {
drivers/net/bnxt/bnxt_flow.c:const struct rte_flow_ops bnxt_flow_ops = {
drivers/net/mlx4/mlx4_flow.c:static const struct rte_flow_ops mlx4_flow_ops = {
drivers/net/sfc/sfc_flow.c:const struct rte_flow_ops sfc_flow_ops = {
drivers/net/softnic/rte_eth_softnic_flow.c:const struct rte_flow_ops pmd_flow_ops = {
drivers/net/ixgbe/ixgbe_ethdev.h:extern const struct rte_flow_ops ixgbe_flow_ops;