From: Jerin Jacob <jerinjacobk@gmail.com>
To: Andrey Vesnovaty <andrey.vesnovaty@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>,
Ferruh Yigit <ferruh.yigit@intel.com>,
Andrew Rybchenko <arybchenko@solarflare.com>,
Ori Kam <orika@mellanox.com>, dpdk-dev <dev@dpdk.org>,
Ziyang Xuan <xuanziyang2@huawei.com>,
Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
Guoyang Zhou <zhouguoyang@huawei.com>,
Rosen Xu <rosen.xu@intel.com>,
Beilei Xing <beilei.xing@intel.com>,
jia.guo@intel.com, Rasesh Mody <rmody@marvell.com>,
Shahed Shaikh <shshaikh@marvell.com>,
Nithin Dabilpuram <ndabilpuram@marvell.com>,
Kiran Kumar K <kirankumark@marvell.com>,
Qiming Yang <qiming.yang@intel.com>,
Qi Zhang <qi.z.zhang@intel.com>,
"Wiles, Keith" <keith.wiles@intel.com>,
Hemant Agrawal <hemant.agrawal@nxp.com>,
Sachin Saxena <sachin.saxena@nxp.com>,
wei.zhao1@intel.com, John Daley <johndale@cisco.com>,
Hyong Youb Kim <hyonkim@cisco.com>, Chas Williams <chas3@att.com>,
Matan Azrad <matan@mellanox.com>,
Shahaf Shuler <shahafs@mellanox.com>,
Slava Ovsiienko <viacheslavo@mellanox.com>,
Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
Gaetan Rivet <grive@u256.net>, Liron Himi <lironh@marvell.com>,
Jingjing Wu <jingjing.wu@intel.com>,
"Wei Hu (Xavier" <xavier.huwei@huawei.com>,
"Min Hu (Connor" <humin29@huawei.com>,
Yisen Zhuang <yisen.zhuang@huawei.com>,
Ajit Khaparde <ajit.khaparde@broadcom.com>,
Somnath Kotur <somnath.kotur@broadcom.com>,
Jasvinder Singh <jasvinder.singh@intel.com>,
Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Subject: Re: [dpdk-dev] [RFC] add flow action context API
Date: Tue, 9 Jun 2020 21:31:25 +0530
Message-ID: <CALBAE1M+0CN2k=+bzXN7n4S=XYwFa2am3rWn84onteV6e=3y2Q@mail.gmail.com>
In-Reply-To: <CAOwx9SsW8edovd6LS-dhckjjs1mgmrqb81A4tTrPOnsa8_cVLg@mail.gmail.com>
On Thu, Jun 4, 2020 at 9:27 PM Andrey Vesnovaty
<andrey.vesnovaty@gmail.com> wrote:
>
> On Thu, Jun 4, 2020 at 3:37 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>
>> I would suggest adding the rte_flow driver implementers if there is a
>> change to rte_flow_ops in an RFC, so that your patch gets enough
>> review. I have added the maintainers of the rte_flow PMD
>> implementations[1] in Cc.
>>
>>
>> >>
>> Would the following additional API suffice for the motivation?
>> >>
>> rte_flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])
>> >
>> >
> This API limits the scope to a single flow, which isn't the goal of the proposed change.
>>
>> Yes. But we need to find the balance between HW features (the driver
>> interface) and the public API.
>> Does Mellanox HW have support for a native shared HW ACTION context?
>
>
> Yes, I'm working on a shared context for the RSS action; patches will be available in a couple of weeks.
> Other candidates for this kind of API extension are counters/meters.
OK.
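For context, here is a minimal sketch of the usage pattern I understand
the RFC is aiming at; the context type and rte_flow_action_ctx_modify()
below are placeholders taken from this discussion, not an existing API:

#include <rte_flow.h>

/* Placeholder declarations; names are borrowed from this thread,
 * this is not an existing DPDK API. */
struct rte_flow_action_ctx;
int rte_flow_action_ctx_modify(struct rte_flow_action_ctx *ctx,
                               const struct rte_flow_action *action,
                               struct rte_flow_error *error);

/* One call re-programs every flow that references the shared context,
 * e.g. to retarget RSS without destroying/recreating the flows. */
static int
update_shared_rss(struct rte_flow_action_ctx *ctx,
                  const struct rte_flow_action_rss *new_rss)
{
        struct rte_flow_error err;
        const struct rte_flow_action action = {
                .type = RTE_FLOW_ACTION_TYPE_RSS,
                .conf = new_rss,
        };

        return rte_flow_action_ctx_modify(ctx, &action, &err);
}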
>
>>
>> If so, it makes sense to have such a fat public API as you suggested.
>> If not, it is a matter of the application iterating over the needed
>> flows to modify the action through rte_flow_modify_action().
>>
>> Assuming Mellanox HW has a native HW ACTION context and the majority
>> of the other HW[1] does not, then IMO we should have common code to
>> handle the complex state machine of action_ctx_create,
>> action_ctx_destroy, rte_flow_action_ctx_modify and action_ctx_query
>> in case the PMD does not support it. (If the PMD/HW supports it, then
>> it can use the native implementation.)
>
>
> Does it mean that all PMDs will support action-ctx create/destroy/update for all types of actions?
That can be based on the return value of the following _driver_ op interface:
flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])
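To illustrate, a rough sketch of how the common layer could dispatch on
that op; every name below is a hypothetical sketch, not an existing
rte_flow_ops field:

#include <errno.h>
#include <rte_flow.h>

/* All names below are hypothetical, not existing DPDK API. */
struct flow_action_ctx;                 /* opaque shared-action handle */

struct flow_ctx_ops {
        /* native shared-action update, when the HW has one */
        int (*action_ctx_modify)(struct flow_action_ctx *ctx,
                                 const struct rte_flow_action actions[]);
        /* per-flow action update (the driver op proposed above) */
        int (*flow_modify_action)(struct rte_flow *flow,
                                  const struct rte_flow_action actions[]);
};

int flow_action_ctx_modify_fallback(struct flow_action_ctx *ctx,
                                    const struct flow_ctx_ops *ops,
                                    const struct rte_flow_action actions[]);

static int
flow_action_ctx_modify(struct flow_action_ctx *ctx,
                       const struct flow_ctx_ops *ops,
                       const struct rte_flow_action actions[])
{
        if (ops->action_ctx_modify != NULL)
                /* PMD/HW has a native shared context: one HW update */
                return ops->action_ctx_modify(ctx, actions);
        if (ops->flow_modify_action == NULL)
                return -ENOTSUP;
        /* otherwise emulate in common code over the per-flow op */
        return flow_action_ctx_modify_fallback(ctx, ops, actions);
}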
>
>> Reason:
>> 1) I think all HW will have the option to update the ACTION for a
>> given flow. octeontx2 has it. If not, let's discuss what a typical HW
>> abstraction for an ACTION-only update would be.
>
>
>>
>> 2) This case can be implemented if the PMD just has
>> flow_modify_action() support. Multiple flows will just be a matter of
>> iterating over all registered flows.
>
>
> The general case won't be just iteration over all flows but:
> 1. destroy the flow
> 2. create the "modified" action
> 3. create a flow with the action from (2)
> Is this what you mean by "common code" handling the action-ctx create/destroy/update implementation?
Yes. Though I'm not sure why the flow should be destroyed if only the
action is getting updated. The driver ops for (2) and (3) can simply be
implemented over:
flow_modify_action(struct rte_flow *flow, const struct rte_flow_action actions[])
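A rough sketch of that emulation loop, reusing the hypothetical
flow_ctx_ops from the sketch above (rollback on partial failure is
omitted):

/* Hypothetical common-code emulation: update the action on every flow
 * attached to the context via the per-flow driver op, leaving the
 * flows themselves in place. */
struct flow_action_ctx {
        struct rte_flow **flows;        /* flows sharing the action */
        unsigned int nb_flows;
};

int
flow_action_ctx_modify_fallback(struct flow_action_ctx *ctx,
                                const struct flow_ctx_ops *ops,
                                const struct rte_flow_action actions[])
{
        unsigned int i;
        int ret;

        for (i = 0; i < ctx->nb_flows; i++) {
                ret = ops->flow_modify_action(ctx->flows[i], actions);
                if (ret != 0)
                        return ret;     /* rollback omitted in sketch */
        }
        return 0;
}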
>
>>
>> 3) Avoid code duplication across all of the PMDs below[1]
>>
>>
>>
>> [1]
>> drivers/net/hinic/hinic_pmd_flow.c:const struct rte_flow_ops hinic_flow_ops = {
>> drivers/net/ipn3ke/ipn3ke_flow.h:extern const struct rte_flow_ops ipn3ke_flow_ops;
>> drivers/net/i40e/i40e_flow.c:const struct rte_flow_ops i40e_flow_ops = {
>> drivers/net/qede/qede_filter.c:const struct rte_flow_ops qede_flow_ops = {
>> drivers/net/octeontx2/otx2_flow.h:extern const struct rte_flow_ops otx2_flow_ops;
>> drivers/net/ice/ice_generic_flow.h:extern const struct rte_flow_ops ice_flow_ops;
>> drivers/net/tap/tap_flow.c:static const struct rte_flow_ops tap_flow_ops = {
>> drivers/net/dpaa2/dpaa2_ethdev.h:extern const struct rte_flow_ops dpaa2_flow_ops;
>> drivers/net/e1000/e1000_ethdev.h:extern const struct rte_flow_ops igb_flow_ops;
>> drivers/net/enic/enic_flow.c:const struct rte_flow_ops enic_flow_ops = {
>> drivers/net/bonding/rte_eth_bond_flow.c:const struct rte_flow_ops bond_flow_ops = {
>> drivers/net/mlx5/mlx5_flow.c:static const struct rte_flow_ops mlx5_flow_ops = {
>> drivers/net/igc/igc_flow.h:extern const struct rte_flow_ops igc_flow_ops;
>> drivers/net/cxgbe/cxgbe_flow.c:static const struct rte_flow_ops cxgbe_flow_ops = {
>> drivers/net/failsafe/failsafe_private.h:extern const struct rte_flow_ops fs_flow_ops;
>> drivers/net/mvpp2/mrvl_flow.c:const struct rte_flow_ops mrvl_flow_ops = {
>> drivers/net/iavf/iavf_generic_flow.c:const struct rte_flow_ops iavf_flow_ops = {
>> drivers/net/hns3/hns3_flow.c:static const struct rte_flow_ops hns3_flow_ops = {
>> drivers/net/bnxt/bnxt_flow.c:const struct rte_flow_ops bnxt_flow_ops = {
>> drivers/net/mlx4/mlx4_flow.c:static const struct rte_flow_ops mlx4_flow_ops = {
>> drivers/net/sfc/sfc_flow.c:const struct rte_flow_ops sfc_flow_ops = {
>> drivers/net/softnic/rte_eth_softnic_flow.c:const struct rte_flow_ops pmd_flow_ops = {
>> drivers/net/ixgbe/ixgbe_ethdev.h:extern const struct rte_flow_ops ixgbe_flow_ops;
Thread overview: 25+ messages
2020-05-20 9:18 Andrey Vesnovaty
2020-06-03 10:02 ` Thomas Monjalon
2020-06-04 11:12 ` Andrey Vesnovaty
2020-06-04 17:23 ` Thomas Monjalon
2020-06-05 8:30 ` Bruce Richardson
2020-06-05 8:33 ` Thomas Monjalon
2020-06-03 10:53 ` Jerin Jacob
2020-06-04 11:25 ` Andrey Vesnovaty
2020-06-04 12:36 ` Jerin Jacob
2020-06-04 15:57 ` Andrey Vesnovaty
2020-06-09 16:01 ` Jerin Jacob [this message]
2020-06-20 13:32 ` [dpdk-dev] [RFC v2 0/1] " Andrey Vesnovaty
2020-06-22 15:22 ` Thomas Monjalon
2020-06-22 17:09 ` Andrey Vesnovaty
2020-06-26 11:44 ` Jerin Jacob
2020-06-28 8:44 ` Andrey Vesnovaty
2020-06-28 13:42 ` Jerin Jacob
2020-06-29 10:22 ` Andrey Vesnovaty
2020-06-30 9:52 ` Jerin Jacob
2020-07-01 9:24 ` Andrey Vesnovaty
2020-07-01 10:34 ` Jerin Jacob
2020-06-20 13:32 ` [dpdk-dev] [RFC v2 1/1] add flow shared action API Andrey Vesnovaty
2020-07-02 0:24 ` Stephen Hemminger
2020-07-02 7:20 ` Ori Kam
2020-07-02 8:06 ` Andrey Vesnovaty