* [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap
@ 2018-04-26 12:08 Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
` (5 more replies)
0 siblings, 6 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 12:08 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
This patchset contains the revised proposal to manage
hardware acceleration of tunnel endpoints, based on community
feedback on the RFC
(http://dpdk.org/ml/archives/dev/2017-December/084676.html). This
proposal is enabled purely through the rte_flow APIs, with the
addition of some new features which were previously implemented
by the rte_tep APIs proposed in the original
RFC. This patchset ultimately aims to enable the configuration
of inline data path encapsulation and decapsulation of tunnel
endpoint network overlays on accelerated IO devices.
V2:
- Split new functions into separate patches, and add additional
documentation.
V3:
- Extended the description of group counter in documentation.
- Renamed VTEP to TUNNEL.
- Fixed C99 syntax.
V4:
- Modify encap/decap actions to be protocol specific
- Rename group action type to jump
- Add mark flow item type in place of metadata flow/action types
- Add count action data structure
- Modify query API to accept rte_flow_action structure in place of
rte_flow_action_type enumeration to support specification and
querying of multiple actions of the same type
V5:
- Documentation and comment updates
- Mark new API structures as experimental
- Squash testpmd enablement of new functions into relevant patches.
V6:
- Rebased to head of next-net
- Fixed whitespace issues added in the previous revision
The summary of the additions to rte_flow is as follows:
- Add new flow actions RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP and
RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP to rte_flow to support
specification of encapsulation and decapsulation of VXLAN and NVGRE
tunnels in hardware.
- Introduce support for the use of pipeline metadata in
the flow pattern definition and the population of metadata fields
from flow actions using the MARK flow item and action.
- Add a shared flag to counters to enable statistics to be kept on
groups of flows, such as all ingress/egress flows of a tunnel.
- Add a JUMP action to allow flows to be redirected to a group
within the device.
A high level summary of the proposed usage model is as follows:
1. Decapsulation
1.1. Decapsulation of tunnel outer headers, forwarding all traffic
to the same queue(s) or port, would have the following flow
parameters (pseudo code used here).
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf_action },
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
1.2.
Decapsulation of tunnel outer headers and matching on inner
headers, then forwarding to the same queue(s) or port.
1.2.1.
The same scenario as above, but either the application
or hardware requires configuration as 2 logically independent
operations (viewing it as 2 logical tables). The first stage
is the flow rule which defines the pattern to match the tunnel
and the action to decapsulate the packet, and the second stage
table matches the inner header and defines the actions, e.g.
forward to port.
flow rule for outer header on table 0
struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action_count shared_counter = {
.shared = 1,
.id = counter_id
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &shared_counter },
{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_action },
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
{
.type = RTE_FLOW_ACTION_TYPE_JUMP,
.conf = { .group = 1 }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
flow rule for inner header on table 1
struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
struct rte_flow_item_mark mark_item = { .id = mark_id };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_item },
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action actions[] = {
{
.type = RTE_FLOW_ACTION_TYPE_PORT_ID,
.conf = &port_id_action = { port_id }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
Note that the MARK action in the flow rule in group 0 generates
the value in the pipeline which is then used as part of the flow
pattern in group 1 to specify the exact flow to match against. In the
case where exact match rules are provided explicitly by the application,
the MARK item value can also be provided by the application in the flow
pattern for the flow rule in group 1.
2. Encapsulation
Encapsulation of all traffic matching a specific flow pattern to a
specified tunnel and egressing to a particular port.
struct rte_flow_attr attr = { .egress = 1 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action_vxlan_encap vxlan_encap_action = {
.definition = {
{ .type=eth, .spec={}, .mask={} },
{ .type=ipv4, .spec={}, .mask={} },
{ .type=udp, .spec={}, .mask={} },
{ .type=vxlan, .spec={}, .mask={} },
{ .type=end }
}
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap_action },
{
.type = RTE_FLOW_ACTION_TYPE_PORT_ID,
.conf = &port_id_action = { port_id }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
};
Declan Doherty (4):
ethdev: Add tunnel encap/decap actions
ethdev: Add group JUMP action
ethdev: add mark flow item to rte_flow_item_types
ethdev: add shared counter support to rte_flow
app/test-pmd/cmdline_flow.c | 51 +++++-
app/test-pmd/config.c | 15 +-
app/test-pmd/testpmd.h | 2 +-
doc/guides/prog_guide/rte_flow.rst | 257 ++++++++++++++++++++++++----
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 8 +
drivers/net/bonding/rte_eth_bond_flow.c | 9 +-
drivers/net/failsafe/failsafe_flow.c | 4 +-
lib/librte_ether/rte_flow.c | 2 +-
lib/librte_ether/rte_flow.h | 211 +++++++++++++++++++++--
lib/librte_ether/rte_flow_driver.h | 2 +-
10 files changed, 500 insertions(+), 61 deletions(-)
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v6 1/4] ethdev: Add tunnel encap/decap actions
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
@ 2018-04-26 12:08 ` Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 2/4] ethdev: Add group JUMP action Declan Doherty
` (4 subsequent siblings)
5 siblings, 0 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 12:08 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add new flow action types and associated action data structures to
support the encapsulation and decapsulation of VXLAN and NVGRE tunnel
endpoints.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP action will cause the
matching flow to be encapsulated in the tunnel endpoint overlay
defined in the [vxlan/nvgre]_encap action data.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP action will cause all
headers associated with the outermost tunnel endpoint of the specified
type to be stripped from the matching flows.
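As an illustrative sketch only (not part of this patch), an application
might populate the new VXLAN encapsulation action as follows; the variable
names and outer header specs are assumed placeholders:
struct rte_flow_item_eth outer_eth;     /* outer Ethernet header spec */
struct rte_flow_item_ipv4 outer_ipv4;   /* outer IPv4 header spec */
struct rte_flow_item_udp outer_udp;     /* outer UDP header spec */
struct rte_flow_item_vxlan outer_vxlan; /* VNI, flags */
/* Tunnel definition, ordered from the ETH item up to the END item. */
struct rte_flow_item vxlan_tunnel[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ipv4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &outer_udp },
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &outer_vxlan },
	{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action_vxlan_encap vxlan_encap = {
	.definition = vxlan_tunnel
};
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap },
	{ .type = RTE_FLOW_ACTION_TYPE_END }
};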
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 107 +++++++++++++++++++++++++++++++++++++
lib/librte_ether/rte_flow.h | 103 +++++++++++++++++++++++++++++++++++
2 files changed, 210 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 629cc90a9..a7197cb7e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1909,6 +1909,113 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
| ``ethertype`` | EtherType |
+---------------+-----------+
+Action: ``VXLAN_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a VXLAN encapsulation action by encapsulating the matched flow in the
+VXLAN tunnel as defined in the ``rte_flow_action_vxlan_encap`` flow item
+definition.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_vxlan_encap`` action structure must define a valid
+VXLAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
+Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
+over Layer 3 Networks). The pattern must be terminated with the
+RTE_FLOW_ITEM_TYPE_END item type.
+
+.. _table_rte_flow_action_vxlan_encap:
+
+.. table:: VXLAN_ENCAP
+
+ +----------------+-------------------------------------+
+ | Field | Value |
+ +================+=====================================+
+ | ``definition`` | Tunnel end-point overlay definition |
+ +----------------+-------------------------------------+
+
+.. _table_rte_flow_action_vxlan_encap_example:
+
+.. table:: IPv4 VxLAN flow pattern example.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | UDP |
+ +-------+----------+
+ | 3 | VXLAN |
+ +-------+----------+
+ | 4 | END |
+ +-------+----------+
+
+Action: ``VXLAN_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the VXLAN tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
+action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
+flow pattern does not specify a valid VXLAN tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+Action: ``NVGRE_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs an NVGRE encapsulation action by encapsulating the matched flow in the
+NVGRE tunnel as defined in the ``rte_flow_action_nvgre_encap`` flow item
+definition.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_nvgre_encap`` action structure must define a valid
+NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
+Virtualization Using Generic Routing Encapsulation). The pattern must be
+terminated with the RTE_FLOW_ITEM_TYPE_END item type.
+
+.. _table_rte_flow_action_nvgre_encap:
+
+.. table:: NVGRE_ENCAP
+
+ +----------------+-------------------------------------+
+ | Field | Value |
+ +================+=====================================+
+ | ``definition`` | NVGRE end-point overlay definition |
+ +----------------+-------------------------------------+
+
+.. _table_rte_flow_action_nvgre_encap_example:
+
+.. table:: IPv4 NVGRE flow pattern example.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | NVGRE |
+ +-------+----------+
+ | 3 | END |
+ +-------+----------+
+
+Action: ``NVGRE_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the NVGRE tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
+action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
+flow pattern does not specify a valid NVGRE tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
Negative types
~~~~~~~~~~~~~~
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index f70056fbd..657cb9a99 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1431,6 +1431,40 @@ enum rte_flow_action_type {
* See struct rte_flow_action_of_push_mpls.
*/
RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
+
+ /**
+ * Encapsulate flow in VXLAN tunnel as defined in
+ * rte_flow_action_vxlan_encap action structure.
+ *
+ * See struct rte_flow_action_vxlan_encap.
+ */
+ RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
+
+ /**
+ * Decapsulate outer most VXLAN tunnel from matched flow.
+ *
+ * If flow pattern does not define a valid VXLAN tunnel (as specified by
+ * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
+ * error.
+ */
+ RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
+
+ /**
+ * Encapsulate flow in NVGRE tunnel defined in the
+ * rte_flow_action_nvgre_encap action structure.
+ *
+ * See struct rte_flow_action_nvgre_encap.
+ */
+ RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
+
+ /**
+ * Decapsulate outer most NVGRE tunnel from matched flow.
+ *
+ * If flow pattern does not define a valid NVGRE tunnel (as specified by
+ * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
+ * error.
+ */
+ RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
};
/**
@@ -1678,6 +1712,75 @@ struct rte_flow_action_of_push_mpls {
};
/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
+ *
+ * VXLAN tunnel end-point encapsulation data definition
+ *
+ * The tunnel definition is provided through the flow item pattern; the
+ * provided pattern must conform to RFC7348 for the tunnel specified. The flow
+ * definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
+ * definition up to the end item, which is specified by RTE_FLOW_ITEM_TYPE_END.
+ *
+ * The mask field allows the user to specify which fields in the flow item
+ * definitions can be ignored and which have valid data and can be used
+ * verbatim.
+ *
+ * Note: the last field is not used in the definition of a tunnel and can be
+ * ignored.
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
+ *
+ * - ETH / IPV4 / UDP / VXLAN / END
+ * - ETH / IPV6 / UDP / VXLAN / END
+ * - ETH / VLAN / IPV4 / UDP / VXLAN / END
+ *
+ */
+struct rte_flow_action_vxlan_encap {
+ /**
+ * Encapsulating vxlan tunnel definition
+ * (terminated by the END pattern item).
+ */
+ struct rte_flow_item *definition;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
+ *
+ * NVGRE tunnel end-point encapsulation data definition
+ *
+ * The tunnel definition is provided through the flow item pattern; the
+ * provided pattern must conform with RFC7637. The flow definition must be
+ * provided in order from the RTE_FLOW_ITEM_TYPE_ETH definition up to the end
+ * item, which is specified by RTE_FLOW_ITEM_TYPE_END.
+ *
+ * The mask field allows the user to specify which fields in the flow item
+ * definitions can be ignored and which have valid data and can be used
+ * verbatim.
+ *
+ * Note: the last field is not used in the definition of a tunnel and can be
+ * ignored.
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP include:
+ *
+ * - ETH / IPV4 / NVGRE / END
+ * - ETH / VLAN / IPV6 / NVGRE / END
+ *
+ */
+struct rte_flow_action_nvgre_encap {
+ /**
+ * Encapsulating NVGRE tunnel definition
+ * (terminated by the END pattern item).
+ */
+ struct rte_flow_item *definition;
+};
+
+/**
* Definition of a single action.
*
* A list of actions is terminated by a END action.
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v6 2/4] ethdev: Add group JUMP action
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
@ 2018-04-26 12:08 ` Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
` (3 subsequent siblings)
5 siblings, 0 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 12:08 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add a jump action type which allows a matched flow to be redirected
to the specified group. This allows physical and logical flow
table/group hierarchies to be defined through rte_flow.
This breaks ABI compatibility for the following public functions (as it
modifies the ordering of the rte_flow_action_type enumeration):
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Add support for specification of the new JUMP action to testpmd's flow
cli, and update the testpmd documentation to describe this new
action.
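For reference, a minimal usage sketch (illustrative only, not part of the
patch); the group number used here is an arbitrary example:
struct rte_flow_action_jump jump = { .group = 1 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
	{ .type = RTE_FLOW_ACTION_TYPE_END }
};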
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 23 +++++++++++
doc/guides/prog_guide/rte_flow.rst | 61 ++++++++++++++++++++++++-----
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++
lib/librte_ether/rte_flow.h | 41 +++++++++++++++----
4 files changed, 112 insertions(+), 17 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4239602b6..6e9fa5d7c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -183,6 +183,8 @@ enum index {
ACTION_END,
ACTION_VOID,
ACTION_PASSTHRU,
+ ACTION_JUMP,
+ ACTION_JUMP_GROUP,
ACTION_MARK,
ACTION_MARK_ID,
ACTION_FLAG,
@@ -738,6 +740,7 @@ static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
ACTION_PASSTHRU,
+ ACTION_JUMP,
ACTION_MARK,
ACTION_FLAG,
ACTION_QUEUE,
@@ -856,6 +859,12 @@ static const enum index action_of_push_mpls[] = {
ZERO,
};
+static const enum index action_jump[] = {
+ ACTION_JUMP_GROUP,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_init(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1931,6 +1940,20 @@ static const struct token token_list[] = {
.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
.call = parse_vc,
},
+ [ACTION_JUMP] = {
+ .name = "jump",
+ .help = "redirect traffic to a given group",
+ .priv = PRIV_ACTION(JUMP, sizeof(struct rte_flow_action_jump)),
+ .next = NEXT(action_jump),
+ .call = parse_vc,
+ },
+ [ACTION_JUMP_GROUP] = {
+ .name = "group",
+ .help = "group to redirect traffic to",
+ .next = NEXT(action_jump, NEXT_ENTRY(UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_jump, group)),
+ .call = parse_vc_conf,
+ },
[ACTION_MARK] = {
.name = "mark",
.help = "attach 32 bit value to packets",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index a7197cb7e..1102fae09 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -90,8 +90,12 @@ Thus predictable results for a given priority level can only be achieved
with non-overlapping rules, using perfect matching on all protocol layers.
Flow rules can also be grouped, the flow rule priority is specific to the
-group they belong to. All flow rules in a given group are thus processed
-either before or after another group.
+group they belong to. All flow rules in a given group are thus processed within
+the context of that group. Groups are not linked by default, so the logical
+hierarchy of groups must be explicitly defined by flow rules themselves in each
+group using the JUMP action to define the next group to redirect to. Only flow
+rules defined in the default group 0 are guaranteed to be matched against; this
+makes group 0 the origin of any group hierarchy defined by an application.
Support for multiple actions per rule may be implemented internally on top
of non-default hardware priorities, as a result both features may not be
@@ -138,29 +142,34 @@ Attributes
Attribute: Group
^^^^^^^^^^^^^^^^
-Flow rules can be grouped by assigning them a common group number. Lower
-values have higher priority. Group 0 has the highest priority.
+Flow rules can be grouped by assigning them a common group number. Groups
+allow a logical hierarchy of flow rule groups (tables) to be defined. These
+groups can be supported virtually in the PMD or in the physical device.
+Group 0 is the default group and is the only group in which flows are
+guaranteed to be matched against; all subsequent groups can only be reached
+by way of the JUMP action from a matched flow rule.
Although optional, applications are encouraged to group similar rules as
much as possible to fully take advantage of hardware capabilities
(e.g. optimized matching) and work around limitations (e.g. a single pattern
-type possibly allowed in a given group).
+type possibly allowed in a given group), while being aware that the group
+hierarchies must be programmed explicitly.
Note that support for more than a single group is not guaranteed.
Attribute: Priority
^^^^^^^^^^^^^^^^^^^
-A priority level can be assigned to a flow rule. Like groups, lower values
+A priority level can be assigned to a flow rule; lower values
denote higher priority, with 0 as the maximum.
-A rule with priority 0 in group 8 is always matched after a rule with
-priority 8 in group 0.
-
-Group and priority levels are arbitrary and up to the application, they do
+Priority levels are arbitrary and up to the application, they do
not need to be contiguous nor start from 0, however the maximum number
varies between devices and may be affected by existing flow rules.
+A flow which matches multiple rules in the same group will always be matched by
+the rule with the highest priority in that group.
+
If a packet is matched by several rules of a given group for a given
priority level, the outcome is undefined. It can take any path, may be
duplicated or even cause unrecoverable errors.
@@ -1372,6 +1381,38 @@ flow rules:
| 2 | END |
+-------+----------------------------+
+Action: ``JUMP``
+^^^^^^^^^^^^^^^^
+
+Redirects packets to a group on the current device.
+
+In a hierarchy of groups, which can be used to represent physical or logical
+flow group/tables on the device, this action redirects the matched flow to
+the specified group on that device.
+
+If a matched flow is redirected to a table which doesn't contain a matching
+rule for that flow then the behavior is undefined and the result is up to the
+specific device. Best practice when using groups would be to define a default
+flow rule for each group which defines the default actions in that group so
+that consistent behavior is defined.
+
+Defining an action for a matched flow in a group to jump to a group which is
+higher in the group hierarchy may not be supported by physical devices,
+depending on how groups are mapped to the physical devices. When defining
+jump actions, applications should be aware that it may be possible to define
+flow rules which trigger undefined behavior, causing flows to loop between
+groups.
+
+.. _table_rte_flow_action_jump:
+
+.. table:: JUMP
+
+ +-----------+------------------------------+
+ | Field | Value |
+ +===========+==============================+
+ | ``group`` | Group to redirect packets to |
+ +-----------+------------------------------+
+
Action: ``MARK``
^^^^^^^^^^^^^^^^
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2edf96dd6..260d044d5 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3453,6 +3453,10 @@ This section lists supported actions and their attributes, if any.
- ``passthru``: let subsequent rule process matched packets.
+- ``jump``: redirect traffic to group on device.
+
+ - ``group {unsigned}``: group to redirect to.
+
- ``mark``: attach 32 bit value to packets.
- ``id {unsigned}``: 32 bit value to return with packets.
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 657cb9a99..17c1c4a89 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -35,18 +35,20 @@ extern "C" {
/**
* Flow rule attributes.
*
- * Priorities are set on two levels: per group and per rule within groups.
+ * Priorities are set on a per-rule basis within groups.
*
- * Lower values denote higher priority, the highest priority for both levels
- * is 0, so that a rule with priority 0 in group 8 is always matched after a
- * rule with priority 8 in group 0.
+ * Lower values denote higher priority, the highest priority for a flow rule
+ * is 0. For a flow that matches more than one rule, the rule with the
+ * lowest priority value will always be matched.
*
* Although optional, applications are encouraged to group similar rules as
* much as possible to fully take advantage of hardware capabilities
* (e.g. optimized matching) and work around limitations (e.g. a single
- * pattern type possibly allowed in a given group).
+ * pattern type possibly allowed in a given group). Applications should be
+ * aware that groups are not linked by default, and that they must be
+ * explicitly linked by the application using the JUMP action.
*
- * Group and priority levels are arbitrary and up to the application, they
+ * Priority levels are arbitrary and up to the application, they
* do not need to be contiguous nor start from 0, however the maximum number
* varies between devices and may be affected by existing flow rules.
*
@@ -69,7 +71,7 @@ extern "C" {
*/
struct rte_flow_attr {
uint32_t group; /**< Priority group. */
- uint32_t priority; /**< Priority level within group. */
+ uint32_t priority; /**< Rule priority level within group. */
uint32_t ingress:1; /**< Rule applies to ingress traffic. */
uint32_t egress:1; /**< Rule applies to egress traffic. */
/**
@@ -1236,6 +1238,15 @@ enum rte_flow_action_type {
*/
RTE_FLOW_ACTION_TYPE_PASSTHRU,
+ /**
+ * RTE_FLOW_ACTION_TYPE_JUMP
+ *
+ * Redirects packets to a group on the current device.
+ *
+ * See struct rte_flow_action_jump.
+ */
+ RTE_FLOW_ACTION_TYPE_JUMP,
+
/**
* Attaches an integer value to packets and sets PKT_RX_FDIR and
* PKT_RX_FDIR_ID mbuf flags.
@@ -1481,6 +1492,22 @@ struct rte_flow_action_mark {
uint32_t id; /**< Integer value to return with packets. */
};
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_JUMP
+ *
+ * Redirects packets to a group on the current device.
+ *
+ * In a hierarchy of groups, which can be used to represent physical or logical
+ * flow tables on the device, this action redirects the matched flow to the
+ * specified group on that device.
+ */
+struct rte_flow_action_jump {
+ uint32_t group;
+};
+
/**
* RTE_FLOW_ACTION_TYPE_QUEUE
*
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v6 3/4] ethdev: add mark flow item to rte_flow_item_types
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 2/4] ethdev: Add group JUMP action Declan Doherty
@ 2018-04-26 12:08 ` Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
` (2 subsequent siblings)
5 siblings, 0 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 12:08 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Introduces a new item type RTE_FLOW_ITEM_TYPE_MARK which enables
flow patterns to specify arbitrary integer values to match against
which are set by the RTE_FLOW_ACTION_TYPE_MARK action in previously
matched flows.
Add support for specification of new MARK flow item in testpmd's cli.
Update testpmd documentation to describe new MARK flow item support.
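For reference, a minimal usage sketch (illustrative only, not part of the
patch); the mark value 42 is an arbitrary example assumed to have been set by
a MARK action in a rule matched earlier in the pipeline:
struct rte_flow_item_mark mark_spec = { .id = 42 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_spec },
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_END }
};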
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 22 +++++++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 30 +++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++++
lib/librte_ether/rte_flow.h | 29 ++++++++++++++++++++++++++++
4 files changed, 85 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6e9fa5d7c..1ac04a0ab 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -91,6 +91,8 @@ enum index {
ITEM_PHY_PORT_INDEX,
ITEM_PORT_ID,
ITEM_PORT_ID_ID,
+ ITEM_MARK,
+ ITEM_MARK_ID,
ITEM_RAW,
ITEM_RAW_RELATIVE,
ITEM_RAW_SEARCH,
@@ -494,6 +496,7 @@ static const enum index next_item[] = {
ITEM_VF,
ITEM_PHY_PORT,
ITEM_PORT_ID,
+ ITEM_MARK,
ITEM_RAW,
ITEM_ETH,
ITEM_VLAN,
@@ -555,6 +558,12 @@ static const enum index item_port_id[] = {
ZERO,
};
+static const enum index item_mark[] = {
+ ITEM_MARK_ID,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index item_raw[] = {
ITEM_RAW_RELATIVE,
ITEM_RAW_SEARCH,
@@ -1289,6 +1298,19 @@ static const struct token token_list[] = {
.next = NEXT(item_port_id, NEXT_ENTRY(UNSIGNED), item_param),
.args = ARGS(ARGS_ENTRY(struct rte_flow_item_port_id, id)),
},
+ [ITEM_MARK] = {
+ .name = "mark",
+ .help = "match traffic against value set in previously matched rule",
+ .priv = PRIV_ITEM(MARK, sizeof(struct rte_flow_item_mark)),
+ .next = NEXT(item_mark),
+ .call = parse_vc,
+ },
+ [ITEM_MARK_ID] = {
+ .name = "id",
+ .help = "Integer value to match against",
+ .next = NEXT(item_mark, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_mark, id)),
+ },
[ITEM_RAW] = {
.name = "raw",
.help = "match an arbitrary byte string",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 1102fae09..301f8762e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -656,6 +656,36 @@ representor" depending on the kind of underlying device).
| ``mask`` | ``id`` | zeroed to match any port ID |
+----------+----------+-----------------------------+
+Item: ``MARK``
+^^^^^^^^^^^^^^
+
+Matches an arbitrary integer value which was set using the ``MARK`` action in
+a previously matched rule.
+
+This item can only be specified once as a match criterion as the ``MARK``
+action can only be specified once in a flow rule's action list.
+
+Note that the value of the MARK field is arbitrary and application defined.
+
+Depending on the underlying implementation the MARK item may be supported on
+the physical device, with virtual groups in the PMD or not at all.
+
+- Default ``mask`` matches any integer value.
+
+.. _table_rte_flow_item_mark:
+
+.. table:: MARK
+
+ +----------+----------+---------------------------+
+ | Field | Subfield | Value |
+ +==========+==========+===========================+
+ | ``spec`` | ``id`` | integer value |
+ +----------+----------+---------------------------+
+ | ``last`` | ``id`` | upper range value |
+ +----------+----------+---------------------------+
+ | ``mask`` | ``id`` | zeroed to match any value |
+ +----------+----------+---------------------------+
+
Data matching item types
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 260d044d5..013a40549 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3240,6 +3240,10 @@ This section lists supported pattern items and their attributes, if any.
- ``id {unsigned}``: DPDK port ID.
+- ``mark``: match value set in previously matched flow rule using the mark action.
+
+ - ``id {unsigned}``: arbitrary integer value.
+
- ``raw``: match an arbitrary byte string.
- ``relative {boolean}``: look for pattern after the previous item.
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 17c1c4a89..d390bbf5a 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -406,6 +406,13 @@ enum rte_flow_item_type {
* See struct rte_flow_item_icmp6_nd_opt_tla_eth.
*/
RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH,
+
+ /**
+ * Matches specified mark field.
+ *
+ * See struct rte_flow_item_mark.
+ */
+ RTE_FLOW_ITEM_TYPE_MARK,
};
/**
@@ -1148,6 +1155,28 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
};
#endif
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_MARK
+ *
+ * Matches an arbitrary integer value which was set using the ``MARK`` action
+ * in a previously matched rule.
+ *
+ * This item can only be specified once as a match criterion as the ``MARK``
+ * action can only be specified once in a flow rule's action list.
+ *
+ * This value is arbitrary and application-defined. Maximum allowed value
+ * depends on the underlying implementation.
+ *
+ * Depending on the underlying implementation the MARK item may be supported on
+ * the physical device, with virtual groups in the PMD or not at all.
+ */
+struct rte_flow_item_mark {
+ uint32_t id; /**< Integer value to match against. */
+};
+
/**
* Matching pattern item definition.
*
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
` (2 preceding siblings ...)
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
@ 2018-04-26 12:08 ` Declan Doherty
2018-04-26 14:06 ` Ori Kam
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
2018-04-27 20:18 ` [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Michael Wildt
5 siblings, 1 reply; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 12:08 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add a rte_flow_action_count action data structure to enable shared
counters across multiple flows on a single port or across multiple
flows on multiple ports within the same switch domain. This also enables
multiple count actions to be specified in a single flow rule.
This patch also modifies the existing rte_flow_query API to take the
rte_flow_action structure as an input parameter instead of the
rte_flow_action_type enumeration to allow querying a specific action
from a flow rule when multiple actions of the same type are specified.
This patch also contains updates for the bonding and failsafe PMDs and
testpmd application which are affected by this API change.
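For reference, a minimal usage sketch (illustrative only, not part of the
patch); port_id and flow are assumed to be an existing port and flow rule
handle, and the counter id is an arbitrary example:
struct rte_flow_action_count shared_count = { .shared = 1, .id = 7 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &shared_count },
	{ .type = RTE_FLOW_ACTION_TYPE_END }
};
/* Later, query the count action of the created flow rule by passing the
 * action definition rather than just its type. */
struct rte_flow_query_count stats = { .reset = 0 };
struct rte_flow_error error;
if (rte_flow_query(port_id, flow, &actions[0], &stats, &error) == 0 &&
    stats.hits_set)
	printf("shared counter %u hits: %" PRIu64 "\n", shared_count.id,
	       stats.hits);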
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 6 ++--
app/test-pmd/config.c | 15 +++++----
app/test-pmd/testpmd.h | 2 +-
doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++------------
drivers/net/bonding/rte_eth_bond_flow.c | 9 ++---
drivers/net/failsafe/failsafe_flow.c | 4 +--
lib/librte_ether/rte_flow.c | 2 +-
lib/librte_ether/rte_flow.h | 38 +++++++++++++++++++--
lib/librte_ether/rte_flow_driver.h | 2 +-
9 files changed, 93 insertions(+), 44 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1ac04a0ab..5754e7858 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -420,7 +420,7 @@ struct buffer {
} destroy; /**< Destroy arguments. */
struct {
uint32_t rule;
- enum rte_flow_action_type action;
+ struct rte_flow_action action;
} query; /**< Query arguments. */
struct {
uint32_t *group;
@@ -1101,7 +1101,7 @@ static const struct token token_list[] = {
.next = NEXT(NEXT_ENTRY(QUERY_ACTION),
NEXT_ENTRY(RULE_ID),
NEXT_ENTRY(PORT_ID)),
- .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action),
+ .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action.type),
ARGS_ENTRY(struct buffer, args.query.rule),
ARGS_ENTRY(struct buffer, port)),
.call = parse_query,
@@ -3842,7 +3842,7 @@ cmd_flow_parsed(const struct buffer *in)
break;
case QUERY:
port_flow_query(in->port, in->args.query.rule,
- in->args.query.action);
+ &in->args.query.action);
break;
case LIST:
port_flow_list(in->port, in->args.list.group_n,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 0f2425229..cd6102dfc 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1452,7 +1452,7 @@ port_flow_flush(portid_t port_id)
/** Query a flow rule. */
int
port_flow_query(portid_t port_id, uint32_t rule,
- enum rte_flow_action_type action)
+ const struct rte_flow_action *action)
{
struct rte_flow_error error;
struct rte_port *port;
@@ -1474,15 +1474,16 @@ port_flow_query(portid_t port_id, uint32_t rule,
return -ENOENT;
}
if ((unsigned int)action >= RTE_DIM(flow_action) ||
- !flow_action[action].name)
+ !flow_action[action->type].name)
name = "unknown";
else
- name = flow_action[action].name;
- switch (action) {
+ name = flow_action[action->type].name;
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
break;
default:
- printf("Cannot query action type %d (%s)\n", action, name);
+ printf("Cannot query action type %d (%s)\n",
+ action->type, name);
return -ENOTSUP;
}
/* Poisoning to make sure PMDs update it in case of error. */
@@ -1490,7 +1491,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
memset(&query, 0, sizeof(query));
if (rte_flow_query(port_id, pf->flow, action, &query, &error))
return port_flow_complain(&error);
- switch (action) {
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
printf("%s:\n"
" hits_set: %u\n"
@@ -1505,7 +1506,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
break;
default:
printf("Cannot display result for action type %d (%s)\n",
- action, name);
+ action->type, name);
break;
}
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index a33b525e2..1af87b8f4 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -620,7 +620,7 @@ int port_flow_create(portid_t port_id,
int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
int port_flow_flush(portid_t port_id);
int port_flow_query(portid_t port_id, uint32_t rule,
- enum rte_flow_action_type action);
+ const struct rte_flow_action *action);
void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
int port_flow_isolate(portid_t port_id, int set);
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 301f8762e..88bfc87eb 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1277,17 +1277,19 @@ Actions are performed in list order:
.. table:: Mark, count then redirect
- +-------+--------+-----------+-------+
- | Index | Action | Field | Value |
- +=======+========+===========+=======+
- | 0 | MARK | ``mark`` | 0x2a |
- +-------+--------+-----------+-------+
- | 1 | COUNT |
- +-------+--------+-----------+-------+
- | 2 | QUEUE | ``queue`` | 10 |
- +-------+--------+-----------+-------+
- | 3 | END |
- +-------+----------------------------+
+ +-------+--------+------------+-------+
+ | Index | Action | Field | Value |
+ +=======+========+============+=======+
+ | 0 | MARK | ``mark`` | 0x2a |
+ +-------+--------+------------+-------+
+ | 1 | COUNT | ``shared`` | 0 |
+ | | +------------+-------+
+ | | | ``id`` | 0 |
+ +-------+--------+------------+-------+
+ | 2 | QUEUE | ``queue`` | 10 |
+ +-------+--------+------------+-------+
+ | 3 | END |
+ +-------+-----------------------------+
|
@@ -1516,23 +1518,36 @@ Drop packets.
Action: ``COUNT``
^^^^^^^^^^^^^^^^^
-Enables counters for this rule.
+Adds a counter action to a matched flow.
+
+If more than one count action is specified in a single flow rule, then each
+action must specify a unique id.
-These counters can be retrieved and reset through ``rte_flow_query()``, see
+Counters can be retrieved and reset through ``rte_flow_query()``, see
``struct rte_flow_query_count``.
-- Counters can be retrieved with ``rte_flow_query()``.
-- No configurable properties.
+The shared flag indicates whether the counter is unique to the flow rule the
+action is specified with, or whether it is a shared counter.
+
+For a count action with the shared flag set, a global device namespace is
+assumed for the counter id, so that any matched flow rules using a count
+action with the same counter id on the same port will contribute to that
+counter.
+
+For ports within the same switch domain, the counter id namespace extends
+to all ports within that switch domain.
.. _table_rte_flow_action_count:
.. table:: COUNT
- +---------------+
- | Field |
- +===============+
- | no properties |
- +---------------+
+ +------------+---------------------+
+ | Field | Value |
+ +============+=====================+
+ | ``shared`` | shared counter flag |
+ +------------+---------------------+
+ | ``id`` | counter id |
+ +------------+---------------------+
Query structure to retrieve and reset flow rule counters:
@@ -2282,7 +2297,7 @@ definition.
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error);
@@ -2290,7 +2305,7 @@ Arguments:
- ``port_id``: port identifier of Ethernet device.
- ``flow``: flow rule handle to query.
-- ``action``: action type to query.
+- ``action``: action to query, this must match the action from the original flow rule.
- ``data``: pointer to storage for the associated query data type.
- ``error``: perform verbose error reporting if not NULL. PMDs initialize
this structure in case of error only.
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 8093c04f5..31e4bcaeb 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -152,6 +152,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
static int
bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
+ const struct rte_flow_action *action,
struct rte_flow_query_count *count,
struct rte_flow_error *err)
{
@@ -165,7 +166,7 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
rte_memcpy(&slave_count, count, sizeof(slave_count));
for (i = 0; i < internals->slave_count; i++) {
ret = rte_flow_query(internals->slaves[i].port_id,
- flow->flows[i], RTE_FLOW_ACTION_TYPE_COUNT,
+ flow->flows[i], action,
&slave_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
@@ -182,12 +183,12 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
static int
bond_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
- enum rte_flow_action_type type, void *arg,
+ const struct rte_flow_action *action, void *arg,
struct rte_flow_error *err)
{
- switch (type) {
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
- return bond_flow_query_count(dev, flow, arg, err);
+ return bond_flow_query_count(dev, flow, action, arg, err);
default:
return rte_flow_error_set(err, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, arg,
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index a97f4075d..bfe42fcee 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -174,7 +174,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
static int
fs_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
- enum rte_flow_action_type type,
+ const struct rte_flow_action *action,
void *arg,
struct rte_flow_error *error)
{
@@ -185,7 +185,7 @@ fs_flow_query(struct rte_eth_dev *dev,
if (sdev != NULL) {
int ret = rte_flow_query(PORT_ID(sdev),
flow->flows[SUB_ID(sdev)],
- type, arg, error);
+ action, arg, error);
if ((ret = fs_err(sdev, ret))) {
fs_unlock(dev, 0);
diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c
index 4f94ac9b5..7947529da 100644
--- a/lib/librte_ether/rte_flow.c
+++ b/lib/librte_ether/rte_flow.c
@@ -233,7 +233,7 @@ rte_flow_flush(uint16_t port_id,
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error)
{
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index d390bbf5a..f8ba71cdb 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1314,7 +1314,7 @@ enum rte_flow_action_type {
* These counters can be retrieved and reset through rte_flow_query(),
* see struct rte_flow_query_count.
*
- * No associated configuration structure.
+ * See struct rte_flow_action_count.
*/
RTE_FLOW_ACTION_TYPE_COUNT,
@@ -1546,6 +1546,38 @@ struct rte_flow_action_queue {
uint16_t index; /**< Queue index to use. */
};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * Adds a counter action to a matched flow.
+ *
+ * If more than one count action is specified in a single flow rule, then each
+ * action must specify a unique id.
+ *
+ * Counters can be retrieved and reset through ``rte_flow_query()``, see
+ * ``struct rte_flow_query_count``.
+ *
+ * The shared flag indicates whether the counter is unique to the flow rule the
+ * action is specified with, or whether it is a shared counter.
+ *
+ * For a count action with the shared flag set, a global device namespace is
+ * assumed for the counter id, so that any matched flow rules using a count
+ * action with the same counter id on the same port will contribute to that
+ * counter.
+ *
+ * For ports within the same switch domain, the counter id namespace extends
+ * to all ports within that switch domain.
+ */
+struct rte_flow_action_count {
+ uint32_t shared:1; /**< Share counter ID with other flow rules. */
+ uint32_t reserved:31; /**< Reserved, must be zero. */
+ uint32_t id; /**< Counter ID. */
+};
+
/**
* RTE_FLOW_ACTION_TYPE_COUNT (query)
*
@@ -2044,7 +2076,7 @@ rte_flow_flush(uint16_t port_id,
* @param flow
* Flow rule handle to query.
* @param action
- * Action type to query.
+ * Action definition as defined in the original flow rule.
* @param[in, out] data
* Pointer to storage for the associated query data type.
* @param[out] error
@@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id,
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error);
diff --git a/lib/librte_ether/rte_flow_driver.h b/lib/librte_ether/rte_flow_driver.h
index 3800310ba..1c90c600d 100644
--- a/lib/librte_ether/rte_flow_driver.h
+++ b/lib/librte_ether/rte_flow_driver.h
@@ -88,7 +88,7 @@ struct rte_flow_ops {
int (*query)
(struct rte_eth_dev *,
struct rte_flow *,
- enum rte_flow_action_type,
+ const struct rte_flow_action *,
void *,
struct rte_flow_error *);
/** See rte_flow_isolate(). */
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
@ 2018-04-26 14:06 ` Ori Kam
2018-04-26 14:27 ` Ferruh Yigit
0 siblings, 1 reply; 23+ messages in thread
From: Ori Kam @ 2018-04-26 14:06 UTC (permalink / raw)
To: Declan Doherty, dev, Ori Kam, Matan Azrad; +Cc: Ferruh Yigit
Hi Declan,
You are changing an API (port_flow_query) which is in use in
both MLX5 and MLX4; this results in breaking the build.
Best,
Ori
> * @param[out] error
> @@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id,
> int
> rte_flow_query(uint16_t port_id,
> struct rte_flow *flow,
> - enum rte_flow_action_type action,
> + const struct rte_flow_action *action,
> void *data,
> struct rte_flow_error *error);
>
> diff --git a/lib/librte_ether/rte_flow_driver.h
> b/lib/librte_ether/rte_flow_driver.h
> index 3800310ba..1c90c600d 100644
> --- a/lib/librte_ether/rte_flow_driver.h
> +++ b/lib/librte_ether/rte_flow_driver.h
> @@ -88,7 +88,7 @@ struct rte_flow_ops {
> int (*query)
> (struct rte_eth_dev *,
> struct rte_flow *,
> - enum rte_flow_action_type,
> + const struct rte_flow_action *,
> void *,
> struct rte_flow_error *);
> /** See rte_flow_isolate(). */
> --
> 2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 14:06 ` Ori Kam
@ 2018-04-26 14:27 ` Ferruh Yigit
2018-04-26 14:43 ` Ori Kam
0 siblings, 1 reply; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-26 14:27 UTC (permalink / raw)
To: Ori Kam, Declan Doherty, dev, Matan Azrad
On 4/26/2018 3:06 PM, Ori Kam wrote:
> Hi Declan,
>
> You are changing API (port_flow_query) which is in use in
> both MLX5 and MLX4 this results in breaking the build.
Hi Ori,
Do you mean "rte_flow_query"? port_flow_query() is a function in testpmd, so how
is mlx using it?
>
> Best,
> Ori
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
>> Sent: Thursday, April 26, 2018 3:08 PM
>> To: dev@dpdk.org
>> Cc: Ferruh Yigit <ferruh.yigit@intel.com>; Declan Doherty
>> <declan.doherty@intel.com>
>> Subject: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to
>> rte_flow
>>
>> Add rte_flow_action_count action data structure to enable shared
>> counters across multiple flows on a single port or across multiple
>> flows on multiple ports within the same switch domain. Also this enables
>> multiple count actions to be specified in a single flow action.
>>
>> This patch also modifies the existing rte_flow_query API to take the
>> rte_flow_action structure as an input parameter instead of the
>> rte_flow_action_type enumeration to allow querying a specific action
>> from a flow rule when multiple actions of the same type are specified.
>>
>> This patch also contains updates for the bonding and failsafe PMDs and
>> testpmd application which are affected by this API change.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> ---
>> app/test-pmd/cmdline_flow.c | 6 ++--
>> app/test-pmd/config.c | 15 +++++----
>> app/test-pmd/testpmd.h | 2 +-
>> doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++------
>> ------
>> drivers/net/bonding/rte_eth_bond_flow.c | 9 ++---
>> drivers/net/failsafe/failsafe_flow.c | 4 +--
>> lib/librte_ether/rte_flow.c | 2 +-
>> lib/librte_ether/rte_flow.h | 38 +++++++++++++++++++--
>> lib/librte_ether/rte_flow_driver.h | 2 +-
>> 9 files changed, 93 insertions(+), 44 deletions(-)
>>
>> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
>> index 1ac04a0ab..5754e7858 100644
>> --- a/app/test-pmd/cmdline_flow.c
>> +++ b/app/test-pmd/cmdline_flow.c
>> @@ -420,7 +420,7 @@ struct buffer {
>> } destroy; /**< Destroy arguments. */
>> struct {
>> uint32_t rule;
>> - enum rte_flow_action_type action;
>> + struct rte_flow_action action;
>> } query; /**< Query arguments. */
>> struct {
>> uint32_t *group;
>> @@ -1101,7 +1101,7 @@ static const struct token token_list[] = {
>> .next = NEXT(NEXT_ENTRY(QUERY_ACTION),
>> NEXT_ENTRY(RULE_ID),
>> NEXT_ENTRY(PORT_ID)),
>> - .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action),
>> + .args = ARGS(ARGS_ENTRY(struct buffer,
>> args.query.action.type),
>> ARGS_ENTRY(struct buffer, args.query.rule),
>> ARGS_ENTRY(struct buffer, port)),
>> .call = parse_query,
>> @@ -3842,7 +3842,7 @@ cmd_flow_parsed(const struct buffer *in)
>> break;
>> case QUERY:
>> port_flow_query(in->port, in->args.query.rule,
>> - in->args.query.action);
>> + &in->args.query.action);
>> break;
>> case LIST:
>> port_flow_list(in->port, in->args.list.group_n,
>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>> index 0f2425229..cd6102dfc 100644
>> --- a/app/test-pmd/config.c
>> +++ b/app/test-pmd/config.c
>> @@ -1452,7 +1452,7 @@ port_flow_flush(portid_t port_id)
>> /** Query a flow rule. */
>> int
>> port_flow_query(portid_t port_id, uint32_t rule,
>> - enum rte_flow_action_type action)
>> + const struct rte_flow_action *action)
>> {
>> struct rte_flow_error error;
>> struct rte_port *port;
>> @@ -1474,15 +1474,16 @@ port_flow_query(portid_t port_id, uint32_t rule,
>> return -ENOENT;
>> }
>> if ((unsigned int)action >= RTE_DIM(flow_action) ||
>> - !flow_action[action].name)
>> + !flow_action[action->type].name)
>> name = "unknown";
>> else
>> - name = flow_action[action].name;
>> - switch (action) {
>> + name = flow_action[action->type].name;
>> + switch (action->type) {
>> case RTE_FLOW_ACTION_TYPE_COUNT:
>> break;
>> default:
>> - printf("Cannot query action type %d (%s)\n", action, name);
>> + printf("Cannot query action type %d (%s)\n",
>> + action->type, name);
>> return -ENOTSUP;
>> }
>> /* Poisoning to make sure PMDs update it in case of error. */
>> @@ -1490,7 +1491,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
>> memset(&query, 0, sizeof(query));
>> if (rte_flow_query(port_id, pf->flow, action, &query, &error))
>> return port_flow_complain(&error);
>> - switch (action) {
>> + switch (action->type) {
>> case RTE_FLOW_ACTION_TYPE_COUNT:
>> printf("%s:\n"
>> " hits_set: %u\n"
>> @@ -1505,7 +1506,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
>> break;
>> default:
>> printf("Cannot display result for action type %d (%s)\n",
>> - action, name);
>> + action->type, name);
>> break;
>> }
>> return 0;
>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
>> index a33b525e2..1af87b8f4 100644
>> --- a/app/test-pmd/testpmd.h
>> +++ b/app/test-pmd/testpmd.h
>> @@ -620,7 +620,7 @@ int port_flow_create(portid_t port_id,
>> int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
>> int port_flow_flush(portid_t port_id);
>> int port_flow_query(portid_t port_id, uint32_t rule,
>> - enum rte_flow_action_type action);
>> + const struct rte_flow_action *action);
>> void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
>> int port_flow_isolate(portid_t port_id, int set);
>>
>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>> index 301f8762e..88bfc87eb 100644
>> --- a/doc/guides/prog_guide/rte_flow.rst
>> +++ b/doc/guides/prog_guide/rte_flow.rst
>> @@ -1277,17 +1277,19 @@ Actions are performed in list order:
>>
>> .. table:: Mark, count then redirect
>>
>> - +-------+--------+-----------+-------+
>> - | Index | Action | Field | Value |
>> - +=======+========+===========+=======+
>> - | 0 | MARK | ``mark`` | 0x2a |
>> - +-------+--------+-----------+-------+
>> - | 1 | COUNT |
>> - +-------+--------+-----------+-------+
>> - | 2 | QUEUE | ``queue`` | 10 |
>> - +-------+--------+-----------+-------+
>> - | 3 | END |
>> - +-------+----------------------------+
>> + +-------+--------+------------+-------+
>> + | Index | Action | Field | Value |
>> + +=======+========+============+=======+
>> + | 0 | MARK | ``mark`` | 0x2a |
>> + +-------+--------+------------+-------+
>> + | 1 | COUNT | ``shared`` | 0 |
>> + | | +------------+-------+
>> + | | | ``id`` | 0 |
>> + +-------+--------+------------+-------+
>> + | 2 | QUEUE | ``queue`` | 10 |
>> + +-------+--------+------------+-------+
>> + | 3 | END |
>> + +-------+-----------------------------+
>>
>> |
>>
>> @@ -1516,23 +1518,36 @@ Drop packets.
>> Action: ``COUNT``
>> ^^^^^^^^^^^^^^^^^
>>
>> -Enables counters for this rule.
>> +Adds a counter action to a matched flow.
>> +
>> +If more than one count action is specified in a single flow rule, then each
>> +action must specify a unique id.
>>
>> -These counters can be retrieved and reset through ``rte_flow_query()``, see
>> +Counters can be retrieved and reset through ``rte_flow_query()``, see
>> ``struct rte_flow_query_count``.
>>
>> -- Counters can be retrieved with ``rte_flow_query()``.
>> -- No configurable properties.
>> +The shared flag indicates whether the counter is unique to the flow rule the
>> +action is specified with, or whether it is a shared counter.
>> +
>> +For a count action with the shared flag set, then then a global device
>> +namespace is assumed for the counter id, so that any matched flow rules
>> using
>> +a count action with the same counter id on the same port will contribute to
>> +that counter.
>> +
>> +For ports within the same switch domain then the counter id namespace
>> extends
>> +to all ports within that switch domain.
>>
>> .. _table_rte_flow_action_count:
>>
>> .. table:: COUNT
>>
>> - +---------------+
>> - | Field |
>> - +===============+
>> - | no properties |
>> - +---------------+
>> + +------------+---------------------+
>> + | Field | Value |
>> + +============+=====================+
>> + | ``shared`` | shared counter flag |
>> + +------------+---------------------+
>> + | ``id`` | counter id |
>> + +------------+---------------------+
>>
>> Query structure to retrieve and reset flow rule counters:
>>
>> @@ -2282,7 +2297,7 @@ definition.
>> int
>> rte_flow_query(uint16_t port_id,
>> struct rte_flow *flow,
>> - enum rte_flow_action_type action,
>> + const struct rte_flow_action *action,
>> void *data,
>> struct rte_flow_error *error);
>>
>> @@ -2290,7 +2305,7 @@ Arguments:
>>
>> - ``port_id``: port identifier of Ethernet device.
>> - ``flow``: flow rule handle to query.
>> -- ``action``: action type to query.
>> +- ``action``: action to query, this must match prototype from flow rule.
>> - ``data``: pointer to storage for the associated query data type.
>> - ``error``: perform verbose error reporting if not NULL. PMDs initialize
>> this structure in case of error only.
>> diff --git a/drivers/net/bonding/rte_eth_bond_flow.c
>> b/drivers/net/bonding/rte_eth_bond_flow.c
>> index 8093c04f5..31e4bcaeb 100644
>> --- a/drivers/net/bonding/rte_eth_bond_flow.c
>> +++ b/drivers/net/bonding/rte_eth_bond_flow.c
>> @@ -152,6 +152,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct
>> rte_flow_error *err)
>>
>> static int
>> bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
>> + const struct rte_flow_action *action,
>> struct rte_flow_query_count *count,
>> struct rte_flow_error *err)
>> {
>> @@ -165,7 +166,7 @@ bond_flow_query_count(struct rte_eth_dev *dev,
>> struct rte_flow *flow,
>> rte_memcpy(&slave_count, count, sizeof(slave_count));
>> for (i = 0; i < internals->slave_count; i++) {
>> ret = rte_flow_query(internals->slaves[i].port_id,
>> - flow->flows[i],
>> RTE_FLOW_ACTION_TYPE_COUNT,
>> + flow->flows[i], action,
>> &slave_count, err);
>> if (unlikely(ret != 0)) {
>> RTE_BOND_LOG(ERR, "Failed to query flow on"
>> @@ -182,12 +183,12 @@ bond_flow_query_count(struct rte_eth_dev *dev,
>> struct rte_flow *flow,
>>
>> static int
>> bond_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
>> - enum rte_flow_action_type type, void *arg,
>> + const struct rte_flow_action *action, void *arg,
>> struct rte_flow_error *err)
>> {
>> - switch (type) {
>> + switch (action->type) {
>> case RTE_FLOW_ACTION_TYPE_COUNT:
>> - return bond_flow_query_count(dev, flow, arg, err);
>> + return bond_flow_query_count(dev, flow, action, arg, err);
>> default:
>> return rte_flow_error_set(err, ENOTSUP,
>> RTE_FLOW_ERROR_TYPE_ACTION,
>> arg,
>> diff --git a/drivers/net/failsafe/failsafe_flow.c
>> b/drivers/net/failsafe/failsafe_flow.c
>> index a97f4075d..bfe42fcee 100644
>> --- a/drivers/net/failsafe/failsafe_flow.c
>> +++ b/drivers/net/failsafe/failsafe_flow.c
>> @@ -174,7 +174,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
>> static int
>> fs_flow_query(struct rte_eth_dev *dev,
>> struct rte_flow *flow,
>> - enum rte_flow_action_type type,
>> + const struct rte_flow_action *action,
>> void *arg,
>> struct rte_flow_error *error)
>> {
>> @@ -185,7 +185,7 @@ fs_flow_query(struct rte_eth_dev *dev,
>> if (sdev != NULL) {
>> int ret = rte_flow_query(PORT_ID(sdev),
>> flow->flows[SUB_ID(sdev)],
>> - type, arg, error);
>> + action, arg, error);
>>
>> if ((ret = fs_err(sdev, ret))) {
>> fs_unlock(dev, 0);
>> diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c
>> index 4f94ac9b5..7947529da 100644
>> --- a/lib/librte_ether/rte_flow.c
>> +++ b/lib/librte_ether/rte_flow.c
>> @@ -233,7 +233,7 @@ rte_flow_flush(uint16_t port_id,
>> int
>> rte_flow_query(uint16_t port_id,
>> struct rte_flow *flow,
>> - enum rte_flow_action_type action,
>> + const struct rte_flow_action *action,
>> void *data,
>> struct rte_flow_error *error)
>> {
>> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
>> index d390bbf5a..f8ba71cdb 100644
>> --- a/lib/librte_ether/rte_flow.h
>> +++ b/lib/librte_ether/rte_flow.h
>> @@ -1314,7 +1314,7 @@ enum rte_flow_action_type {
>> * These counters can be retrieved and reset through
>> rte_flow_query(),
>> * see struct rte_flow_query_count.
>> *
>> - * No associated configuration structure.
>> + * See struct rte_flow_action_count.
>> */
>> RTE_FLOW_ACTION_TYPE_COUNT,
>>
>> @@ -1546,6 +1546,38 @@ struct rte_flow_action_queue {
>> uint16_t index; /**< Queue index to use. */
>> };
>>
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this structure may change without prior notice
>> + *
>> + * RTE_FLOW_ACTION_TYPE_COUNT
>> + *
>> + * Adds a counter action to a matched flow.
>> + *
>> + * If more than one count action is specified in a single flow rule, then each
>> + * action must specify a unique id.
>> + *
>> + * Counters can be retrieved and reset through ``rte_flow_query()``, see
>> + * ``struct rte_flow_query_count``.
>> + *
>> + * The shared flag indicates whether the counter is unique to the flow rule
>> the
>> + * action is specified with, or whether it is a shared counter.
>> + *
>> + * For a count action with the shared flag set, then then a global device
>> + * namespace is assumed for the counter id, so that any matched flow rules
>> using
>> + * a count action with the same counter id on the same port will contribute
>> to
>> + * that counter.
>> + *
>> + * For ports within the same switch domain then the counter id namespace
>> extends
>> + * to all ports within that switch domain.
>> + */
>> +struct rte_flow_action_count {
>> + uint32_t shared:1; /**< Share counter ID with other flow rules. */
>> + uint32_t reserved:31; /**< Reserved, must be zero. */
>> + uint32_t id; /**< Counter ID. */
>> +};
>> +
>> /**
>> * RTE_FLOW_ACTION_TYPE_COUNT (query)
>> *
>> @@ -2044,7 +2076,7 @@ rte_flow_flush(uint16_t port_id,
>> * @param flow
>> * Flow rule handle to query.
>> * @param action
>> - * Action type to query.
>> + * Action definition as defined in original flow rule.
>> * @param[in, out] data
>> * Pointer to storage for the associated query data type.
>> * @param[out] error
>> @@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id,
>> int
>> rte_flow_query(uint16_t port_id,
>> struct rte_flow *flow,
>> - enum rte_flow_action_type action,
>> + const struct rte_flow_action *action,
>> void *data,
>> struct rte_flow_error *error);
>>
>> diff --git a/lib/librte_ether/rte_flow_driver.h
>> b/lib/librte_ether/rte_flow_driver.h
>> index 3800310ba..1c90c600d 100644
>> --- a/lib/librte_ether/rte_flow_driver.h
>> +++ b/lib/librte_ether/rte_flow_driver.h
>> @@ -88,7 +88,7 @@ struct rte_flow_ops {
>> int (*query)
>> (struct rte_eth_dev *,
>> struct rte_flow *,
>> - enum rte_flow_action_type,
>> + const struct rte_flow_action *,
>> void *,
>> struct rte_flow_error *);
>> /** See rte_flow_isolate(). */
>> --
>> 2.14.3
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 14:27 ` Ferruh Yigit
@ 2018-04-26 14:43 ` Ori Kam
2018-04-26 14:48 ` Doherty, Declan
0 siblings, 1 reply; 23+ messages in thread
From: Ori Kam @ 2018-04-26 14:43 UTC (permalink / raw)
To: Ferruh Yigit, Declan Doherty, dev, Matan Azrad, Shahaf Shuler
Hi,
PSB
Ori
> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, April 26, 2018 5:28 PM
> To: Ori Kam <orika@mellanox.com>; Declan Doherty
> <declan.doherty@intel.com>; dev@dpdk.org; Matan Azrad
> <matan@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support
> to rte_flow
>
> On 4/26/2018 3:06 PM, Ori Kam wrote:
> > Hi Declan,
> >
> > You are changing API (port_flow_query) which is in use in both MLX5
> > and MLX4 this results in breaking the build.
>
> Hi Ori,
>
> Do you mean "rte_flow_query"? port_flow_query() is a function in testpmd,
> how mlx is using it?
>
My bad, let me be clearer.
MLX5 and MLX4 implement the rte_flow_ops query function.
This patch changes the prototype of that query function, which results in a
compilation error and should also require a corresponding change in the MLX5 and MLX4 implementations.
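To illustrate the scope of that change for a driver, the query callback registered
in rte_flow_ops now receives the full action definition instead of just its type,
along the lines of the bonding PMD update in this patch. A minimal sketch, purely
illustrative (pmd_flow_query and pmd_flow_query_count are hypothetical names, not
the mlx4/mlx5 sources):

static int
pmd_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
               const struct rte_flow_action *action, /* was enum rte_flow_action_type */
               void *data, struct rte_flow_error *error)
{
        switch (action->type) {
        case RTE_FLOW_ACTION_TYPE_COUNT:
                /* data points to a struct rte_flow_query_count to fill in */
                return pmd_flow_query_count(dev, flow, action, data, error);
        default:
                return rte_flow_error_set(error, ENOTSUP,
                                          RTE_FLOW_ERROR_TYPE_ACTION,
                                          action, "action not queryable");
        }
}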
> >
> > Best,
> > Ori
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
> >> Sent: Thursday, April 26, 2018 3:08 PM
> >> To: dev@dpdk.org
> >> Cc: Ferruh Yigit <ferruh.yigit@intel.com>; Declan Doherty
> >> <declan.doherty@intel.com>
> >> Subject: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support
> >> to rte_flow
> >>
> >> Add rte_flow_action_count action data structure to enable shared
> >> counters across multiple flows on a single port or across multiple
> >> flows on multiple ports within the same switch domain. Also this
> >> enables multiple count actions to be specified in a single flow action.
> >>
> >> This patch also modifies the existing rte_flow_query API to take the
> >> rte_flow_action structure as an input parameter instead of the
> >> rte_flow_action_type enumeration to allow querying a specific action
> >> from a flow rule when multiple actions of the same type are specified.
> >>
> >> This patch also contains updates for the bonding and failsafe PMDs
> >> and testpmd application which are affected by this API change.
> >>
> >> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> >> ---
> >> app/test-pmd/cmdline_flow.c | 6 ++--
> >> app/test-pmd/config.c | 15 +++++----
> >> app/test-pmd/testpmd.h | 2 +-
> >> doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++--
> ----
> >> ------
> >> drivers/net/bonding/rte_eth_bond_flow.c | 9 ++---
> >> drivers/net/failsafe/failsafe_flow.c | 4 +--
> >> lib/librte_ether/rte_flow.c | 2 +-
> >> lib/librte_ether/rte_flow.h | 38 +++++++++++++++++++--
> >> lib/librte_ether/rte_flow_driver.h | 2 +-
> >> 9 files changed, 93 insertions(+), 44 deletions(-)
> >>
> >> diff --git a/app/test-pmd/cmdline_flow.c
> >> b/app/test-pmd/cmdline_flow.c index 1ac04a0ab..5754e7858 100644
> >> --- a/app/test-pmd/cmdline_flow.c
> >> +++ b/app/test-pmd/cmdline_flow.c
> >> @@ -420,7 +420,7 @@ struct buffer {
> >> } destroy; /**< Destroy arguments. */
> >> struct {
> >> uint32_t rule;
> >> - enum rte_flow_action_type action;
> >> + struct rte_flow_action action;
> >> } query; /**< Query arguments. */
> >> struct {
> >> uint32_t *group;
> >> @@ -1101,7 +1101,7 @@ static const struct token token_list[] = {
> >> .next = NEXT(NEXT_ENTRY(QUERY_ACTION),
> >> NEXT_ENTRY(RULE_ID),
> >> NEXT_ENTRY(PORT_ID)),
> >> - .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action),
> >> + .args = ARGS(ARGS_ENTRY(struct buffer,
> >> args.query.action.type),
> >> ARGS_ENTRY(struct buffer, args.query.rule),
> >> ARGS_ENTRY(struct buffer, port)),
> >> .call = parse_query,
> >> @@ -3842,7 +3842,7 @@ cmd_flow_parsed(const struct buffer *in)
> >> break;
> >> case QUERY:
> >> port_flow_query(in->port, in->args.query.rule,
> >> - in->args.query.action);
> >> + &in->args.query.action);
> >> break;
> >> case LIST:
> >> port_flow_list(in->port, in->args.list.group_n, diff --git
> >> a/app/test-pmd/config.c b/app/test-pmd/config.c index
> >> 0f2425229..cd6102dfc 100644
> >> --- a/app/test-pmd/config.c
> >> +++ b/app/test-pmd/config.c
> >> @@ -1452,7 +1452,7 @@ port_flow_flush(portid_t port_id)
> >> /** Query a flow rule. */
> >> int
> >> port_flow_query(portid_t port_id, uint32_t rule,
> >> - enum rte_flow_action_type action)
> >> + const struct rte_flow_action *action)
> >> {
> >> struct rte_flow_error error;
> >> struct rte_port *port;
> >> @@ -1474,15 +1474,16 @@ port_flow_query(portid_t port_id, uint32_t
> rule,
> >> return -ENOENT;
> >> }
> >> if ((unsigned int)action >= RTE_DIM(flow_action) ||
> >> - !flow_action[action].name)
> >> + !flow_action[action->type].name)
> >> name = "unknown";
> >> else
> >> - name = flow_action[action].name;
> >> - switch (action) {
> >> + name = flow_action[action->type].name;
> >> + switch (action->type) {
> >> case RTE_FLOW_ACTION_TYPE_COUNT:
> >> break;
> >> default:
> >> - printf("Cannot query action type %d (%s)\n", action, name);
> >> + printf("Cannot query action type %d (%s)\n",
> >> + action->type, name);
> >> return -ENOTSUP;
> >> }
> >> /* Poisoning to make sure PMDs update it in case of error. */ @@
> >> -1490,7 +1491,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
> >> memset(&query, 0, sizeof(query));
> >> if (rte_flow_query(port_id, pf->flow, action, &query, &error))
> >> return port_flow_complain(&error);
> >> - switch (action) {
> >> + switch (action->type) {
> >> case RTE_FLOW_ACTION_TYPE_COUNT:
> >> printf("%s:\n"
> >> " hits_set: %u\n"
> >> @@ -1505,7 +1506,7 @@ port_flow_query(portid_t port_id, uint32_t
> rule,
> >> break;
> >> default:
> >> printf("Cannot display result for action type %d (%s)\n",
> >> - action, name);
> >> + action->type, name);
> >> break;
> >> }
> >> return 0;
> >> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> >> a33b525e2..1af87b8f4 100644
> >> --- a/app/test-pmd/testpmd.h
> >> +++ b/app/test-pmd/testpmd.h
> >> @@ -620,7 +620,7 @@ int port_flow_create(portid_t port_id, int
> >> port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t
> >> *rule); int port_flow_flush(portid_t port_id); int
> >> port_flow_query(portid_t port_id, uint32_t rule,
> >> - enum rte_flow_action_type action);
> >> + const struct rte_flow_action *action);
> >> void port_flow_list(portid_t port_id, uint32_t n, const uint32_t
> >> *group); int port_flow_isolate(portid_t port_id, int set);
> >>
> >> diff --git a/doc/guides/prog_guide/rte_flow.rst
> >> b/doc/guides/prog_guide/rte_flow.rst
> >> index 301f8762e..88bfc87eb 100644
> >> --- a/doc/guides/prog_guide/rte_flow.rst
> >> +++ b/doc/guides/prog_guide/rte_flow.rst
> >> @@ -1277,17 +1277,19 @@ Actions are performed in list order:
> >>
> >> .. table:: Mark, count then redirect
> >>
> >> - +-------+--------+-----------+-------+
> >> - | Index | Action | Field | Value |
> >> - +=======+========+===========+=======+
> >> - | 0 | MARK | ``mark`` | 0x2a |
> >> - +-------+--------+-----------+-------+
> >> - | 1 | COUNT |
> >> - +-------+--------+-----------+-------+
> >> - | 2 | QUEUE | ``queue`` | 10 |
> >> - +-------+--------+-----------+-------+
> >> - | 3 | END |
> >> - +-------+----------------------------+
> >> + +-------+--------+------------+-------+
> >> + | Index | Action | Field | Value |
> >> + +=======+========+============+=======+
> >> + | 0 | MARK | ``mark`` | 0x2a |
> >> + +-------+--------+------------+-------+
> >> + | 1 | COUNT | ``shared`` | 0 |
> >> + | | +------------+-------+
> >> + | | | ``id`` | 0 |
> >> + +-------+--------+------------+-------+
> >> + | 2 | QUEUE | ``queue`` | 10 |
> >> + +-------+--------+------------+-------+
> >> + | 3 | END |
> >> + +-------+-----------------------------+
> >>
> >> |
> >>
> >> @@ -1516,23 +1518,36 @@ Drop packets.
> >> Action: ``COUNT``
> >> ^^^^^^^^^^^^^^^^^
> >>
> >> -Enables counters for this rule.
> >> +Adds a counter action to a matched flow.
> >> +
> >> +If more than one count action is specified in a single flow rule,
> >> +then each action must specify a unique id.
> >>
> >> -These counters can be retrieved and reset through
> >> ``rte_flow_query()``, see
> >> +Counters can be retrieved and reset through ``rte_flow_query()``,
> >> +see
> >> ``struct rte_flow_query_count``.
> >>
> >> -- Counters can be retrieved with ``rte_flow_query()``.
> >> -- No configurable properties.
> >> +The shared flag indicates whether the counter is unique to the flow
> >> +rule the action is specified with, or whether it is a shared counter.
> >> +
> >> +For a count action with the shared flag set, then then a global
> >> +device namespace is assumed for the counter id, so that any matched
> >> +flow rules
> >> using
> >> +a count action with the same counter id on the same port will
> >> +contribute to that counter.
> >> +
> >> +For ports within the same switch domain then the counter id
> >> +namespace
> >> extends
> >> +to all ports within that switch domain.
> >>
> >> .. _table_rte_flow_action_count:
> >>
> >> .. table:: COUNT
> >>
> >> - +---------------+
> >> - | Field |
> >> - +===============+
> >> - | no properties |
> >> - +---------------+
> >> + +------------+---------------------+
> >> + | Field | Value |
> >> + +============+=====================+
> >> + | ``shared`` | shared counter flag |
> >> + +------------+---------------------+
> >> + | ``id`` | counter id |
> >> + +------------+---------------------+
> >>
> >> Query structure to retrieve and reset flow rule counters:
> >>
> >> @@ -2282,7 +2297,7 @@ definition.
> >> int
> >> rte_flow_query(uint16_t port_id,
> >> struct rte_flow *flow,
> >> - enum rte_flow_action_type action,
> >> + const struct rte_flow_action *action,
> >> void *data,
> >> struct rte_flow_error *error);
> >>
> >> @@ -2290,7 +2305,7 @@ Arguments:
> >>
> >> - ``port_id``: port identifier of Ethernet device.
> >> - ``flow``: flow rule handle to query.
> >> -- ``action``: action type to query.
> >> +- ``action``: action to query, this must match prototype from flow rule.
> >> - ``data``: pointer to storage for the associated query data type.
> >> - ``error``: perform verbose error reporting if not NULL. PMDs initialize
> >> this structure in case of error only.
> >> diff --git a/drivers/net/bonding/rte_eth_bond_flow.c
> >> b/drivers/net/bonding/rte_eth_bond_flow.c
> >> index 8093c04f5..31e4bcaeb 100644
> >> --- a/drivers/net/bonding/rte_eth_bond_flow.c
> >> +++ b/drivers/net/bonding/rte_eth_bond_flow.c
> >> @@ -152,6 +152,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct
> >> rte_flow_error *err)
> >>
> >> static int
> >> bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow
> >> *flow,
> >> + const struct rte_flow_action *action,
> >> struct rte_flow_query_count *count,
> >> struct rte_flow_error *err)
> >> {
> >> @@ -165,7 +166,7 @@ bond_flow_query_count(struct rte_eth_dev
> *dev,
> >> struct rte_flow *flow,
> >> rte_memcpy(&slave_count, count, sizeof(slave_count));
> >> for (i = 0; i < internals->slave_count; i++) {
> >> ret = rte_flow_query(internals->slaves[i].port_id,
> >> - flow->flows[i],
> >> RTE_FLOW_ACTION_TYPE_COUNT,
> >> + flow->flows[i], action,
> >> &slave_count, err);
> >> if (unlikely(ret != 0)) {
> >> RTE_BOND_LOG(ERR, "Failed to query flow on"
> >> @@ -182,12 +183,12 @@ bond_flow_query_count(struct rte_eth_dev
> *dev,
> >> struct rte_flow *flow,
> >>
> >> static int
> >> bond_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
> >> - enum rte_flow_action_type type, void *arg,
> >> + const struct rte_flow_action *action, void *arg,
> >> struct rte_flow_error *err)
> >> {
> >> - switch (type) {
> >> + switch (action->type) {
> >> case RTE_FLOW_ACTION_TYPE_COUNT:
> >> - return bond_flow_query_count(dev, flow, arg, err);
> >> + return bond_flow_query_count(dev, flow, action, arg, err);
> >> default:
> >> return rte_flow_error_set(err, ENOTSUP,
> >> RTE_FLOW_ERROR_TYPE_ACTION,
> >> arg,
> >> diff --git a/drivers/net/failsafe/failsafe_flow.c
> >> b/drivers/net/failsafe/failsafe_flow.c
> >> index a97f4075d..bfe42fcee 100644
> >> --- a/drivers/net/failsafe/failsafe_flow.c
> >> +++ b/drivers/net/failsafe/failsafe_flow.c
> >> @@ -174,7 +174,7 @@ fs_flow_flush(struct rte_eth_dev *dev, static
> >> int fs_flow_query(struct rte_eth_dev *dev,
> >> struct rte_flow *flow,
> >> - enum rte_flow_action_type type,
> >> + const struct rte_flow_action *action,
> >> void *arg,
> >> struct rte_flow_error *error) { @@ -185,7 +185,7 @@
> >> fs_flow_query(struct rte_eth_dev *dev,
> >> if (sdev != NULL) {
> >> int ret = rte_flow_query(PORT_ID(sdev),
> >> flow->flows[SUB_ID(sdev)],
> >> - type, arg, error);
> >> + action, arg, error);
> >>
> >> if ((ret = fs_err(sdev, ret))) {
> >> fs_unlock(dev, 0);
> >> diff --git a/lib/librte_ether/rte_flow.c
> >> b/lib/librte_ether/rte_flow.c index 4f94ac9b5..7947529da 100644
> >> --- a/lib/librte_ether/rte_flow.c
> >> +++ b/lib/librte_ether/rte_flow.c
> >> @@ -233,7 +233,7 @@ rte_flow_flush(uint16_t port_id, int
> >> rte_flow_query(uint16_t port_id,
> >> struct rte_flow *flow,
> >> - enum rte_flow_action_type action,
> >> + const struct rte_flow_action *action,
> >> void *data,
> >> struct rte_flow_error *error) { diff --git
> >> a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h index
> >> d390bbf5a..f8ba71cdb 100644
> >> --- a/lib/librte_ether/rte_flow.h
> >> +++ b/lib/librte_ether/rte_flow.h
> >> @@ -1314,7 +1314,7 @@ enum rte_flow_action_type {
> >> * These counters can be retrieved and reset through
> >> rte_flow_query(),
> >> * see struct rte_flow_query_count.
> >> *
> >> - * No associated configuration structure.
> >> + * See struct rte_flow_action_count.
> >> */
> >> RTE_FLOW_ACTION_TYPE_COUNT,
> >>
> >> @@ -1546,6 +1546,38 @@ struct rte_flow_action_queue {
> >> uint16_t index; /**< Queue index to use. */ };
> >>
> >> +
> >> +/**
> >> + * @warning
> >> + * @b EXPERIMENTAL: this structure may change without prior notice
> >> + *
> >> + * RTE_FLOW_ACTION_TYPE_COUNT
> >> + *
> >> + * Adds a counter action to a matched flow.
> >> + *
> >> + * If more than one count action is specified in a single flow rule,
> >> +then each
> >> + * action must specify a unique id.
> >> + *
> >> + * Counters can be retrieved and reset through ``rte_flow_query()``,
> >> +see
> >> + * ``struct rte_flow_query_count``.
> >> + *
> >> + * The shared flag indicates whether the counter is unique to the
> >> +flow rule
> >> the
> >> + * action is specified with, or whether it is a shared counter.
> >> + *
> >> + * For a count action with the shared flag set, then then a global
> >> + device
> >> + * namespace is assumed for the counter id, so that any matched flow
> >> + rules
> >> using
> >> + * a count action with the same counter id on the same port will
> >> + contribute
> >> to
> >> + * that counter.
> >> + *
> >> + * For ports within the same switch domain then the counter id
> >> + namespace
> >> extends
> >> + * to all ports within that switch domain.
> >> + */
> >> +struct rte_flow_action_count {
> >> + uint32_t shared:1; /**< Share counter ID with other flow rules. */
> >> + uint32_t reserved:31; /**< Reserved, must be zero. */
> >> + uint32_t id; /**< Counter ID. */
> >> +};
> >> +
> >> /**
> >> * RTE_FLOW_ACTION_TYPE_COUNT (query)
> >> *
> >> @@ -2044,7 +2076,7 @@ rte_flow_flush(uint16_t port_id,
> >> * @param flow
> >> * Flow rule handle to query.
> >> * @param action
> >> - * Action type to query.
> >> + * Action definition as defined in original flow rule.
> >> * @param[in, out] data
> >> * Pointer to storage for the associated query data type.
> >> * @param[out] error
> >> @@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id, int
> >> rte_flow_query(uint16_t port_id,
> >> struct rte_flow *flow,
> >> - enum rte_flow_action_type action,
> >> + const struct rte_flow_action *action,
> >> void *data,
> >> struct rte_flow_error *error);
> >>
> >> diff --git a/lib/librte_ether/rte_flow_driver.h
> >> b/lib/librte_ether/rte_flow_driver.h
> >> index 3800310ba..1c90c600d 100644
> >> --- a/lib/librte_ether/rte_flow_driver.h
> >> +++ b/lib/librte_ether/rte_flow_driver.h
> >> @@ -88,7 +88,7 @@ struct rte_flow_ops {
> >> int (*query)
> >> (struct rte_eth_dev *,
> >> struct rte_flow *,
> >> - enum rte_flow_action_type,
> >> + const struct rte_flow_action *,
> >> void *,
> >> struct rte_flow_error *);
> >> /** See rte_flow_isolate(). */
> >> --
> >> 2.14.3
> >
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 14:43 ` Ori Kam
@ 2018-04-26 14:48 ` Doherty, Declan
0 siblings, 0 replies; 23+ messages in thread
From: Doherty, Declan @ 2018-04-26 14:48 UTC (permalink / raw)
To: Ori Kam, Ferruh Yigit, dev, Matan Azrad, Shahaf Shuler
On 26/04/2018 3:43 PM, Ori Kam wrote:
> Hi,
>
> PSB
>
> Ori
>> -----Original Message-----
>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>> Sent: Thursday, April 26, 2018 5:28 PM
>> To: Ori Kam <orika@mellanox.com>; Declan Doherty
>> <declan.doherty@intel.com>; dev@dpdk.org; Matan Azrad
>> <matan@mellanox.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support
>> to rte_flow
>>
>> On 4/26/2018 3:06 PM, Ori Kam wrote:
>>> Hi Declan,
>>>
>>> You are changing API (port_flow_query) which is in use in both MLX5
>>> and MLX4 this results in breaking the build.
>>
>> Hi Ori,
>>
>> Do you mean "rte_flow_query"? port_flow_query() is a function in testpmd,
>> how mlx is using it?
>>
>
> My bad let me be clearer.
> MLX5 and MLX4 are implementing the rte_flow_ops query function.
> This patch changes the prototype for the query function which results in
> compilation error and should also result in some change in the MLX5 and MLX4 implementation.
>
>
Hey Ori, I don't see any reference to the query callback in the MLX4 code.
static const struct rte_flow_ops mlx4_flow_ops = {
.validate = mlx4_flow_validate,
.create = mlx4_flow_create,
.destroy = mlx4_flow_destroy,
.flush = mlx4_flow_flush,
.isolate = mlx4_flow_isolate,
};
I'm just looking at fixing the MLX5 compilation now; I missed the reference
because the function is ifdef'd out with the
HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT flag.
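For a driver that does register a query callback (as mlx5 does under that flag),
the change is confined to the prototype of the callback it plugs into its
rte_flow_ops; a hypothetical sketch with made-up pmd_flow_* names, not the mlx5
sources:

static const struct rte_flow_ops pmd_flow_ops = {
        .validate = pmd_flow_validate,
        .create = pmd_flow_create,
        .destroy = pmd_flow_destroy,
        .flush = pmd_flow_flush,
        /* query now has the signature:
         * int (*query)(struct rte_eth_dev *, struct rte_flow *,
         *              const struct rte_flow_action *, void *,
         *              struct rte_flow_error *);
         */
        .query = pmd_flow_query,
        .isolate = pmd_flow_isolate,
};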
>>>
>>> Best,
>>> Ori
>>>
>>>> -----Original Message-----
>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Declan Doherty
>>>> Sent: Thursday, April 26, 2018 3:08 PM
>>>> To: dev@dpdk.org
>>>> Cc: Ferruh Yigit <ferruh.yigit@intel.com>; Declan Doherty
>>>> <declan.doherty@intel.com>
>>>> Subject: [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support
>>>> to rte_flow
>>>>
>>>> Add rte_flow_action_count action data structure to enable shared
>>>> counters across multiple flows on a single port or across multiple
>>>> flows on multiple ports within the same switch domain. Also this
>>>> enables multiple count actions to be specified in a single flow action.
>>>>
>>>> This patch also modifies the existing rte_flow_query API to take the
>>>> rte_flow_action structure as an input parameter instead of the
>>>> rte_flow_action_type enumeration to allow querying a specific action
>>>> from a flow rule when multiple actions of the same type are specified.
>>>>
>>>> This patch also contains updates for the bonding and failsafe PMDs
>>>> and testpmd application which are affected by this API change.
>>>>
>>>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>>>> ---
>>>> app/test-pmd/cmdline_flow.c | 6 ++--
>>>> app/test-pmd/config.c | 15 +++++----
>>>> app/test-pmd/testpmd.h | 2 +-
>>>> doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++--
>> ----
>>>> ------
>>>> drivers/net/bonding/rte_eth_bond_flow.c | 9 ++---
>>>> drivers/net/failsafe/failsafe_flow.c | 4 +--
>>>> lib/librte_ether/rte_flow.c | 2 +-
>>>> lib/librte_ether/rte_flow.h | 38 +++++++++++++++++++--
>>>> lib/librte_ether/rte_flow_driver.h | 2 +-
>>>> 9 files changed, 93 insertions(+), 44 deletions(-)
>>>>
>>>> diff --git a/app/test-pmd/cmdline_flow.c
>>>> b/app/test-pmd/cmdline_flow.c index 1ac04a0ab..5754e7858 100644
>>>> --- a/app/test-pmd/cmdline_flow.c
>>>> +++ b/app/test-pmd/cmdline_flow.c
>>>> @@ -420,7 +420,7 @@ struct buffer {
>>>> } destroy; /**< Destroy arguments. */
>>>> struct {
>>>> uint32_t rule;
>>>> - enum rte_flow_action_type action;
>>>> + struct rte_flow_action action;
>>>> } query; /**< Query arguments. */
>>>> struct {
>>>> uint32_t *group;
>>>> @@ -1101,7 +1101,7 @@ static const struct token token_list[] = {
>>>> .next = NEXT(NEXT_ENTRY(QUERY_ACTION),
>>>> NEXT_ENTRY(RULE_ID),
>>>> NEXT_ENTRY(PORT_ID)),
>>>> - .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action),
>>>> + .args = ARGS(ARGS_ENTRY(struct buffer,
>>>> args.query.action.type),
>>>> ARGS_ENTRY(struct buffer, args.query.rule),
>>>> ARGS_ENTRY(struct buffer, port)),
>>>> .call = parse_query,
>>>> @@ -3842,7 +3842,7 @@ cmd_flow_parsed(const struct buffer *in)
>>>> break;
>>>> case QUERY:
>>>> port_flow_query(in->port, in->args.query.rule,
>>>> - in->args.query.action);
>>>> + &in->args.query.action);
>>>> break;
>>>> case LIST:
>>>> port_flow_list(in->port, in->args.list.group_n, diff --git
>>>> a/app/test-pmd/config.c b/app/test-pmd/config.c index
>>>> 0f2425229..cd6102dfc 100644
>>>> --- a/app/test-pmd/config.c
>>>> +++ b/app/test-pmd/config.c
>>>> @@ -1452,7 +1452,7 @@ port_flow_flush(portid_t port_id)
>>>> /** Query a flow rule. */
>>>> int
>>>> port_flow_query(portid_t port_id, uint32_t rule,
>>>> - enum rte_flow_action_type action)
>>>> + const struct rte_flow_action *action)
>>>> {
>>>> struct rte_flow_error error;
>>>> struct rte_port *port;
>>>> @@ -1474,15 +1474,16 @@ port_flow_query(portid_t port_id, uint32_t
>> rule,
>>>> return -ENOENT;
>>>> }
>>>> if ((unsigned int)action >= RTE_DIM(flow_action) ||
>>>> - !flow_action[action].name)
>>>> + !flow_action[action->type].name)
>>>> name = "unknown";
>>>> else
>>>> - name = flow_action[action].name;
>>>> - switch (action) {
>>>> + name = flow_action[action->type].name;
>>>> + switch (action->type) {
>>>> case RTE_FLOW_ACTION_TYPE_COUNT:
>>>> break;
>>>> default:
>>>> - printf("Cannot query action type %d (%s)\n", action, name);
>>>> + printf("Cannot query action type %d (%s)\n",
>>>> + action->type, name);
>>>> return -ENOTSUP;
>>>> }
>>>> /* Poisoning to make sure PMDs update it in case of error. */ @@
>>>> -1490,7 +1491,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
>>>> memset(&query, 0, sizeof(query));
>>>> if (rte_flow_query(port_id, pf->flow, action, &query, &error))
>>>> return port_flow_complain(&error);
>>>> - switch (action) {
>>>> + switch (action->type) {
>>>> case RTE_FLOW_ACTION_TYPE_COUNT:
>>>> printf("%s:\n"
>>>> " hits_set: %u\n"
>>>> @@ -1505,7 +1506,7 @@ port_flow_query(portid_t port_id, uint32_t
>> rule,
>>>> break;
>>>> default:
>>>> printf("Cannot display result for action type %d (%s)\n",
>>>> - action, name);
>>>> + action->type, name);
>>>> break;
>>>> }
>>>> return 0;
>>>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
>>>> a33b525e2..1af87b8f4 100644
>>>> --- a/app/test-pmd/testpmd.h
>>>> +++ b/app/test-pmd/testpmd.h
>>>> @@ -620,7 +620,7 @@ int port_flow_create(portid_t port_id, int
>>>> port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t
>>>> *rule); int port_flow_flush(portid_t port_id); int
>>>> port_flow_query(portid_t port_id, uint32_t rule,
>>>> - enum rte_flow_action_type action);
>>>> + const struct rte_flow_action *action);
>>>> void port_flow_list(portid_t port_id, uint32_t n, const uint32_t
>>>> *group); int port_flow_isolate(portid_t port_id, int set);
>>>>
>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>>>> b/doc/guides/prog_guide/rte_flow.rst
>>>> index 301f8762e..88bfc87eb 100644
>>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>>> @@ -1277,17 +1277,19 @@ Actions are performed in list order:
>>>>
>>>> .. table:: Mark, count then redirect
>>>>
>>>> - +-------+--------+-----------+-------+
>>>> - | Index | Action | Field | Value |
>>>> - +=======+========+===========+=======+
>>>> - | 0 | MARK | ``mark`` | 0x2a |
>>>> - +-------+--------+-----------+-------+
>>>> - | 1 | COUNT |
>>>> - +-------+--------+-----------+-------+
>>>> - | 2 | QUEUE | ``queue`` | 10 |
>>>> - +-------+--------+-----------+-------+
>>>> - | 3 | END |
>>>> - +-------+----------------------------+
>>>> + +-------+--------+------------+-------+
>>>> + | Index | Action | Field | Value |
>>>> + +=======+========+============+=======+
>>>> + | 0 | MARK | ``mark`` | 0x2a |
>>>> + +-------+--------+------------+-------+
>>>> + | 1 | COUNT | ``shared`` | 0 |
>>>> + | | +------------+-------+
>>>> + | | | ``id`` | 0 |
>>>> + +-------+--------+------------+-------+
>>>> + | 2 | QUEUE | ``queue`` | 10 |
>>>> + +-------+--------+------------+-------+
>>>> + | 3 | END |
>>>> + +-------+-----------------------------+
>>>>
>>>> |
>>>>
>>>> @@ -1516,23 +1518,36 @@ Drop packets.
>>>> Action: ``COUNT``
>>>> ^^^^^^^^^^^^^^^^^
>>>>
>>>> -Enables counters for this rule.
>>>> +Adds a counter action to a matched flow.
>>>> +
>>>> +If more than one count action is specified in a single flow rule,
>>>> +then each action must specify a unique id.
>>>>
>>>> -These counters can be retrieved and reset through
>>>> ``rte_flow_query()``, see
>>>> +Counters can be retrieved and reset through ``rte_flow_query()``,
>>>> +see
>>>> ``struct rte_flow_query_count``.
>>>>
>>>> -- Counters can be retrieved with ``rte_flow_query()``.
>>>> -- No configurable properties.
>>>> +The shared flag indicates whether the counter is unique to the flow
>>>> +rule the action is specified with, or whether it is a shared counter.
>>>> +
>>>> +For a count action with the shared flag set, then then a global
>>>> +device namespace is assumed for the counter id, so that any matched
>>>> +flow rules
>>>> using
>>>> +a count action with the same counter id on the same port will
>>>> +contribute to that counter.
>>>> +
>>>> +For ports within the same switch domain then the counter id
>>>> +namespace
>>>> extends
>>>> +to all ports within that switch domain.
>>>>
>>>> .. _table_rte_flow_action_count:
>>>>
>>>> .. table:: COUNT
>>>>
>>>> - +---------------+
>>>> - | Field |
>>>> - +===============+
>>>> - | no properties |
>>>> - +---------------+
>>>> + +------------+---------------------+
>>>> + | Field | Value |
>>>> + +============+=====================+
>>>> + | ``shared`` | shared counter flag |
>>>> + +------------+---------------------+
>>>> + | ``id`` | counter id |
>>>> + +------------+---------------------+
>>>>
>>>> Query structure to retrieve and reset flow rule counters:
>>>>
>>>> @@ -2282,7 +2297,7 @@ definition.
>>>> int
>>>> rte_flow_query(uint16_t port_id,
>>>> struct rte_flow *flow,
>>>> - enum rte_flow_action_type action,
>>>> + const struct rte_flow_action *action,
>>>> void *data,
>>>> struct rte_flow_error *error);
>>>>
>>>> @@ -2290,7 +2305,7 @@ Arguments:
>>>>
>>>> - ``port_id``: port identifier of Ethernet device.
>>>> - ``flow``: flow rule handle to query.
>>>> -- ``action``: action type to query.
>>>> +- ``action``: action to query, this must match prototype from flow rule.
>>>> - ``data``: pointer to storage for the associated query data type.
>>>> - ``error``: perform verbose error reporting if not NULL. PMDs initialize
>>>> this structure in case of error only.
>>>> diff --git a/drivers/net/bonding/rte_eth_bond_flow.c
>>>> b/drivers/net/bonding/rte_eth_bond_flow.c
>>>> index 8093c04f5..31e4bcaeb 100644
>>>> --- a/drivers/net/bonding/rte_eth_bond_flow.c
>>>> +++ b/drivers/net/bonding/rte_eth_bond_flow.c
>>>> @@ -152,6 +152,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct
>>>> rte_flow_error *err)
>>>>
>>>> static int
>>>> bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow
>>>> *flow,
>>>> + const struct rte_flow_action *action,
>>>> struct rte_flow_query_count *count,
>>>> struct rte_flow_error *err)
>>>> {
>>>> @@ -165,7 +166,7 @@ bond_flow_query_count(struct rte_eth_dev
>> *dev,
>>>> struct rte_flow *flow,
>>>> rte_memcpy(&slave_count, count, sizeof(slave_count));
>>>> for (i = 0; i < internals->slave_count; i++) {
>>>> ret = rte_flow_query(internals->slaves[i].port_id,
>>>> - flow->flows[i],
>>>> RTE_FLOW_ACTION_TYPE_COUNT,
>>>> + flow->flows[i], action,
>>>> &slave_count, err);
>>>> if (unlikely(ret != 0)) {
>>>> RTE_BOND_LOG(ERR, "Failed to query flow on"
>>>> @@ -182,12 +183,12 @@ bond_flow_query_count(struct rte_eth_dev
>> *dev,
>>>> struct rte_flow *flow,
>>>>
>>>> static int
>>>> bond_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
>>>> - enum rte_flow_action_type type, void *arg,
>>>> + const struct rte_flow_action *action, void *arg,
>>>> struct rte_flow_error *err)
>>>> {
>>>> - switch (type) {
>>>> + switch (action->type) {
>>>> case RTE_FLOW_ACTION_TYPE_COUNT:
>>>> - return bond_flow_query_count(dev, flow, arg, err);
>>>> + return bond_flow_query_count(dev, flow, action, arg, err);
>>>> default:
>>>> return rte_flow_error_set(err, ENOTSUP,
>>>> RTE_FLOW_ERROR_TYPE_ACTION,
>>>> arg,
>>>> diff --git a/drivers/net/failsafe/failsafe_flow.c
>>>> b/drivers/net/failsafe/failsafe_flow.c
>>>> index a97f4075d..bfe42fcee 100644
>>>> --- a/drivers/net/failsafe/failsafe_flow.c
>>>> +++ b/drivers/net/failsafe/failsafe_flow.c
>>>> @@ -174,7 +174,7 @@ fs_flow_flush(struct rte_eth_dev *dev, static
>>>> int fs_flow_query(struct rte_eth_dev *dev,
>>>> struct rte_flow *flow,
>>>> - enum rte_flow_action_type type,
>>>> + const struct rte_flow_action *action,
>>>> void *arg,
>>>> struct rte_flow_error *error) { @@ -185,7 +185,7 @@
>>>> fs_flow_query(struct rte_eth_dev *dev,
>>>> if (sdev != NULL) {
>>>> int ret = rte_flow_query(PORT_ID(sdev),
>>>> flow->flows[SUB_ID(sdev)],
>>>> - type, arg, error);
>>>> + action, arg, error);
>>>>
>>>> if ((ret = fs_err(sdev, ret))) {
>>>> fs_unlock(dev, 0);
>>>> diff --git a/lib/librte_ether/rte_flow.c
>>>> b/lib/librte_ether/rte_flow.c index 4f94ac9b5..7947529da 100644
>>>> --- a/lib/librte_ether/rte_flow.c
>>>> +++ b/lib/librte_ether/rte_flow.c
>>>> @@ -233,7 +233,7 @@ rte_flow_flush(uint16_t port_id, int
>>>> rte_flow_query(uint16_t port_id,
>>>> struct rte_flow *flow,
>>>> - enum rte_flow_action_type action,
>>>> + const struct rte_flow_action *action,
>>>> void *data,
>>>> struct rte_flow_error *error) { diff --git
>>>> a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h index
>>>> d390bbf5a..f8ba71cdb 100644
>>>> --- a/lib/librte_ether/rte_flow.h
>>>> +++ b/lib/librte_ether/rte_flow.h
>>>> @@ -1314,7 +1314,7 @@ enum rte_flow_action_type {
>>>> * These counters can be retrieved and reset through
>>>> rte_flow_query(),
>>>> * see struct rte_flow_query_count.
>>>> *
>>>> - * No associated configuration structure.
>>>> + * See struct rte_flow_action_count.
>>>> */
>>>> RTE_FLOW_ACTION_TYPE_COUNT,
>>>>
>>>> @@ -1546,6 +1546,38 @@ struct rte_flow_action_queue {
>>>> uint16_t index; /**< Queue index to use. */ };
>>>>
>>>> +
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this structure may change without prior notice
>>>> + *
>>>> + * RTE_FLOW_ACTION_TYPE_COUNT
>>>> + *
>>>> + * Adds a counter action to a matched flow.
>>>> + *
>>>> + * If more than one count action is specified in a single flow rule,
>>>> +then each
>>>> + * action must specify a unique id.
>>>> + *
>>>> + * Counters can be retrieved and reset through ``rte_flow_query()``,
>>>> +see
>>>> + * ``struct rte_flow_query_count``.
>>>> + *
>>>> + * The shared flag indicates whether the counter is unique to the
>>>> +flow rule
>>>> the
>>>> + * action is specified with, or whether it is a shared counter.
>>>> + *
>>>> + * For a count action with the shared flag set, then then a global
>>>> + device
>>>> + * namespace is assumed for the counter id, so that any matched flow
>>>> + rules
>>>> using
>>>> + * a count action with the same counter id on the same port will
>>>> + contribute
>>>> to
>>>> + * that counter.
>>>> + *
>>>> + * For ports within the same switch domain then the counter id
>>>> + namespace
>>>> extends
>>>> + * to all ports within that switch domain.
>>>> + */
>>>> +struct rte_flow_action_count {
>>>> + uint32_t shared:1; /**< Share counter ID with other flow rules. */
>>>> + uint32_t reserved:31; /**< Reserved, must be zero. */
>>>> + uint32_t id; /**< Counter ID. */
>>>> +};
>>>> +
>>>> /**
>>>> * RTE_FLOW_ACTION_TYPE_COUNT (query)
>>>> *
>>>> @@ -2044,7 +2076,7 @@ rte_flow_flush(uint16_t port_id,
>>>> * @param flow
>>>> * Flow rule handle to query.
>>>> * @param action
>>>> - * Action type to query.
>>>> + * Action definition as defined in original flow rule.
>>>> * @param[in, out] data
>>>> * Pointer to storage for the associated query data type.
>>>> * @param[out] error
>>>> @@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id, int
>>>> rte_flow_query(uint16_t port_id,
>>>> struct rte_flow *flow,
>>>> - enum rte_flow_action_type action,
>>>> + const struct rte_flow_action *action,
>>>> void *data,
>>>> struct rte_flow_error *error);
>>>>
>>>> diff --git a/lib/librte_ether/rte_flow_driver.h
>>>> b/lib/librte_ether/rte_flow_driver.h
>>>> index 3800310ba..1c90c600d 100644
>>>> --- a/lib/librte_ether/rte_flow_driver.h
>>>> +++ b/lib/librte_ether/rte_flow_driver.h
>>>> @@ -88,7 +88,7 @@ struct rte_flow_ops {
>>>> int (*query)
>>>> (struct rte_eth_dev *,
>>>> struct rte_flow *,
>>>> - enum rte_flow_action_type,
>>>> + const struct rte_flow_action *,
>>>> void *,
>>>> struct rte_flow_error *);
>>>> /** See rte_flow_isolate(). */
>>>> --
>>>> 2.14.3
>>>
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v7 0/4] ethdev: add shared counter support to rte_flow
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
` (3 preceding siblings ...)
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
@ 2018-04-26 17:29 ` Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
` (4 more replies)
2018-04-27 20:18 ` [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Michael Wildt
5 siblings, 5 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 17:29 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
This patchset contains the revised proposal to manage
tunnel endpoint hardware acceleration based on community
feedback on RFC
(http://dpdk.org/ml/archives/dev/2017-December/084676.html). This
proposal is purely enabled through rte_flow APIs with the
additions of some new features which were previously implemented
by the proposed rte_tep APIs which were proposed in the original
RFC. This patchset ultimately aims to enable the configuration
of inline data path encapsulation and decapsulation of tunnel
endpoint network overlays on accelerated IO devices.
V2:
- Split new functions into separate patches, and add additional
documentation.
V3:
- Extended the description of group counter in documentation.
- Renamed VTEP to TUNNEL.
- Fixed C99 syntax.
V4:
- Modify encap/decap actions to be protocol specific
- rename group action type to jump
- add mark flow item type in place of metadata flow/action types
- add count action data structure
- modify query API to accept rte_flow_action structure in place of
rte_flow_action_type enumeration to support specification and
querying of multiple actions of the same type
V5:
- Documentation and comment updates
- Mark new API structures as experimental
- squash new function testpmd enablement into relevant patches.
V6:
- rebased to head of next-net
- fixed whitespace issues added in previous revision
V7:
- fix mlx5 compilation issue due to change in flow query API
The summary of the additions to the rte_flow are as follows:
- Add new flow actions RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP and
RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP to rte_flow to support
specification of encapsulation and decapsulation of VXLAN and NVGRE
tunnels in hardware.
- Introduces support for the use of pipeline metadata in
the flow pattern definition and the population of metadata fields
from flow actions using the MARK flow and action items.
- Add shared flag to counters to enable statistics to be kept on
groups of flows such as all ingress/egress flows of a tunnel
- Adds jump_action to allow flows to be redirected to a group
within the device.
A high level summary of the proposed usage model is as follows:
1. Decapsulation
1.1. Decapsulation of tunnel outer headers, forwarding all traffic
to the same queue/s or port, would have the following flow
parameters (pseudo code is used here).
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
{ .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf_action },
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
1.2.
Decapsulation of tunnel outer headers and matching on inner
headers, and forwarding to the same queue/s or port.
1.2.1.
The same scenario as above, but either the application
or hardware requires configuration as 2 logically independent
operations (viewing it as 2 logical tables). The first stage
is the flow rule to define the pattern to match the tunnel
and the action to decapsulate the packet, and the second stage
table matches the inner header and defines the actions,
e.g. forward to port.
flow rule for outer header on table 0
struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action_count shared_counter = {
.shared = 1,
.id = {counter_id}
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &shared_counter },
{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_action },
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
{
.type = RTE_FLOW_ACTION_TYPE_JUMP,
.conf = { .group = 1 }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
flow rule for inner header on table 1
struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
struct rte_flow_item_mark mark_item = { .id = {mark_id} };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_item },
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action actions[] = {
{
.type = RTE_FLOW_ACTION_TYPE_PORT_ID,
.conf = &port_id_action = { port_id }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
}
Note that the mark action in the flow rule in group 0 generates
the value in the pipeline which is then used as part of the flow
pattern in group 1 to specify the exact flow to match against. In the
case where exact match rules are being provided explicitly by the
application, the MARK item value can also be provided by the
application in the flow pattern for the flow rule in group 1.
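As a rough sketch of how these two stages could be programmed (pseudo code;
outer_*/inner_* stand for the per-stage attr/pattern/actions definitions above,
and port_id is an assumed example port):
struct rte_flow_error error;
struct rte_flow *outer_flow, *inner_flow;
/* Stage 1: match the tunnel in group 0, count, mark, decap and jump. */
outer_flow = rte_flow_create(port_id, &outer_attr, outer_pattern,
                             outer_actions, &error);
if (outer_flow == NULL)
        printf("outer rule failed: %s\n",
               error.message ? error.message : "(no details)");
/* Stage 2: match the inner headers (plus the mark) in group 1 and forward. */
inner_flow = rte_flow_create(port_id, &inner_attr, inner_pattern,
                             inner_actions, &error);
if (inner_flow == NULL)
        printf("inner rule failed: %s\n",
               error.message ? error.message : "(no details)");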
2. Encapsulation
Encapsulation of all traffic matching a specific flow pattern to a
specified tunnel and egressing to a particular port.
struct rte_flow_attr attr = { .egress = 1 };
struct rte_flow_item pattern[] = {
{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
{ .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
{ .type = RTE_FLOW_ITEM_TYPE_END }
};
struct rte_flow_action_vxlan_encap vxlan_encap_action = {
.definition = {
{ .type=eth, .spec={}, .mask={} },
{ .type=ipv4, .spec={}, .mask={} },
{ .type=udp, .spec={}, .mask={} },
{ .type=vxlan, .spec={}, .mask={} },
{ .type=end }
}
};
struct rte_flow_action actions[] = {
{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
{ .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap_action },
{
.type = RTE_FLOW_ACTION_TYPE_PORT_ID,
.conf = &port_id_action = { port_id }
},
{ .type = RTE_FLOW_ACTION_TYPE_END }
};
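As a rough sketch (pseudo code; attr, pattern and actions are the
encapsulation rule definitions above, port_id is an assumed example port),
the PMD's support for the rule can be checked before it is programmed:
struct rte_flow_error error;
struct rte_flow *flow = NULL;
/* Ask the PMD whether it can offload this rule before creating it. */
if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
else
        printf("encap rule not supported: %s\n",
               error.message ? error.message : "(no details)");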
Declan Doherty (4):
ethdev: Add tunnel encap/decap actions
ethdev: Add group JUMP action
ethdev: add mark flow item to rte_flow_item_types
ethdev: add shared counter support to rte_flow
app/test-pmd/cmdline_flow.c | 51 +++++-
app/test-pmd/config.c | 19 +-
app/test-pmd/testpmd.h | 2 +-
doc/guides/prog_guide/rte_flow.rst | 257 ++++++++++++++++++++++++----
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 8 +
drivers/net/bonding/rte_eth_bond_flow.c | 9 +-
drivers/net/failsafe/failsafe_flow.c | 4 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_flow.c | 2 +-
lib/librte_ether/rte_flow.c | 2 +-
lib/librte_ether/rte_flow.h | 211 +++++++++++++++++++++--
lib/librte_ether/rte_flow_driver.h | 2 +-
12 files changed, 504 insertions(+), 65 deletions(-)
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v7 1/4] ethdev: Add tunnel encap/decap actions
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
@ 2018-04-26 17:29 ` Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action Declan Doherty
` (3 subsequent siblings)
4 siblings, 0 replies; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 17:29 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add new flow action types and associated action data structures to
support the encapsulation and decapsulation of VXLAN and NVGRE tunnel
endpoints.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP action will cause the
matching flow to be encapsulated in the tunnel endpoint overlay
defined in the [vxlan/nvgre]_encap action data.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP action will cause all
headers associated with the outermost tunnel endpoint of the specified
type to be stripped from the matching flows.
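A rough usage sketch of building the encap action data (illustrative only;
the outer header item specs are assumed to be populated by the application):
/* Outer header values are assumed to be filled in elsewhere. */
struct rte_flow_item_eth outer_eth;
struct rte_flow_item_ipv4 outer_ipv4;
struct rte_flow_item_udp outer_udp;
struct rte_flow_item_vxlan outer_vxlan;
struct rte_flow_item vxlan_tunnel[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ipv4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &outer_udp },
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &outer_vxlan },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_vxlan_encap vxlan_encap = { .definition = vxlan_tunnel };
struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};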
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 107 +++++++++++++++++++++++++++++++++++++
lib/librte_ether/rte_flow.h | 103 +++++++++++++++++++++++++++++++++++
2 files changed, 210 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 629cc90a9..e92969757 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1909,6 +1909,113 @@ Implements ``OFPAT_PUSH_MPLS`` ("push a new MPLS tag") as defined by the
| ``ethertype`` | EtherType |
+---------------+-----------+
+Action: ``VXLAN_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a VXLAN encapsulation action by encapsulating the matched flow in the
+VXLAN tunnel as defined in the ``rte_flow_action_vxlan_encap`` flow items
+definition.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_vxlan_encap`` action structure must define a valid
+VXLAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local
+Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks
+over Layer 3 Networks). The pattern must be terminated with the
+RTE_FLOW_ITEM_TYPE_END item type.
+
+.. _table_rte_flow_action_vxlan_encap:
+
+.. table:: VXLAN_ENCAP
+
+ +----------------+-------------------------------------+
+ | Field | Value |
+ +================+=====================================+
+ | ``definition`` | Tunnel end-point overlay definition |
+ +----------------+-------------------------------------+
+
+.. _table_rte_flow_action_vxlan_encap_example:
+
+.. table:: IPv4 VxLAN flow pattern example.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | UDP |
+ +-------+----------+
+ | 3 | VXLAN |
+ +-------+----------+
+ | 4 | END |
+ +-------+----------+
+
+Action: ``VXLAN_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the VXLAN tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``VXLAN_DECAP``
+action is specified, must define a valid VXLAN tunnel as per RFC7348. If the
+flow pattern does not specify a valid VXLAN tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
+Action: ``NVGRE_ENCAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a NVGRE encapsulation action by encapsulating the matched flow in the
+NVGRE tunnel as defined in the ``rte_flow_action_nvgre_encap`` flow item
+definition.
+
+This action modifies the payload of matched flows. The flow definition specified
+in the ``rte_flow_action_nvgre_encap`` action structure must define a valid
+NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network
+Virtualization Using Generic Routing Encapsulation). The pattern must be
+terminated with the RTE_FLOW_ITEM_TYPE_END item type.
+
+.. _table_rte_flow_action_nvgre_encap:
+
+.. table:: NVGRE_ENCAP
+
+ +----------------+-------------------------------------+
+ | Field | Value |
+ +================+=====================================+
+ | ``definition`` | NVGRE end-point overlay definition |
+ +----------------+-------------------------------------+
+
+.. _table_rte_flow_action_nvgre_encap_example:
+
+.. table:: IPv4 NVGRE flow pattern example.
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | Ethernet |
+ +-------+----------+
+ | 1 | IPv4 |
+ +-------+----------+
+ | 2 | NVGRE |
+ +-------+----------+
+ | 3 | END |
+ +-------+----------+
+
+Action: ``NVGRE_DECAP``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Performs a decapsulation action by stripping all headers of the NVGRE tunnel
+network overlay from the matched flow.
+
+The flow items pattern defined for the flow rule with which a ``NVGRE_DECAP``
+action is specified, must define a valid NVGRE tunnel as per RFC7637. If the
+flow pattern does not specify a valid NVGRE tunnel then a
+RTE_FLOW_ERROR_TYPE_ACTION error should be returned.
+
+This action modifies the payload of matched flows.
+
Negative types
~~~~~~~~~~~~~~
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index f70056fbd..657cb9a99 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1431,6 +1431,40 @@ enum rte_flow_action_type {
* See struct rte_flow_action_of_push_mpls.
*/
RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS,
+
+ /**
+ * Encapsulate flow in VXLAN tunnel as defined in
+ * rte_flow_action_vxlan_encap action structure.
+ *
+ * See struct rte_flow_action_vxlan_encap.
+ */
+ RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
+
+ /**
+ * Decapsulate outer most VXLAN tunnel from matched flow.
+ *
+ * If flow pattern does not define a valid VXLAN tunnel (as specified by
+ * RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
+ * error.
+ */
+ RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
+
+ /**
+ * Encapsulate flow in NVGRE tunnel defined in the
+ * rte_flow_action_nvgre_encap action structure.
+ *
+ * See struct rte_flow_action_nvgre_encap.
+ */
+ RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP,
+
+ /**
+ * Decapsulate outer most NVGRE tunnel from matched flow.
+ *
+ * If flow pattern does not define a valid NVGRE tunnel (as specified by
+ * RFC7637) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
+ * error.
+ */
+ RTE_FLOW_ACTION_TYPE_NVGRE_DECAP,
};
/**
@@ -1678,6 +1712,75 @@ struct rte_flow_action_of_push_mpls {
};
/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
+ *
+ * VXLAN tunnel end-point encapsulation data definition
+ *
+ * The tunnel definition is provided through the flow item pattern; the
+ * provided pattern must conform to RFC7348 for the tunnel specified. The flow
+ * definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
+ * definition up to the end item which is specified by RTE_FLOW_ITEM_TYPE_END.
+ *
+ * The mask field allows the user to specify which fields in the flow item
+ * definitions can be ignored and which have valid data and can be used
+ * verbatim.
+ *
+ * Note: the last field is not used in the definition of a tunnel and can be
+ * ignored.
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
+ *
+ * - ETH / IPV4 / UDP / VXLAN / END
+ * - ETH / IPV6 / UDP / VXLAN / END
+ * - ETH / VLAN / IPV4 / UDP / VXLAN / END
+ *
+ */
+struct rte_flow_action_vxlan_encap {
+ /**
+ * Encapsulating vxlan tunnel definition
+ * (terminated by the END pattern item).
+ */
+ struct rte_flow_item *definition;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP
+ *
+ * NVGRE tunnel end-point encapsulation data definition
+ *
+ * The tunnel definition is provided through the flow item pattern; the
+ * provided pattern must conform with RFC7637. The flow definition must be
+ * provided in order from the RTE_FLOW_ITEM_TYPE_ETH definition up to the end item
+ * which is specified by RTE_FLOW_ITEM_TYPE_END.
+ *
+ * The mask field allows the user to specify which fields in the flow item
+ * definitions can be ignored and which have valid data and can be used
+ * verbatim.
+ *
+ * Note: the last field is not used in the definition of a tunnel and can be
+ * ignored.
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP include:
+ *
+ * - ETH / IPV4 / NVGRE / END
+ * - ETH / VLAN / IPV6 / NVGRE / END
+ *
+ */
+struct rte_flow_action_nvgre_encap {
+ /**
+ * Encapsulating nvgre tunnel definition
+ * (terminated by the END pattern item).
+ */
+ struct rte_flow_item *definition;
+};
+
+/*
* Definition of a single action.
*
* A list of actions is terminated by a END action.
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
@ 2018-04-26 17:29 ` Declan Doherty
2018-04-26 18:54 ` Thomas Monjalon
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
` (2 subsequent siblings)
4 siblings, 1 reply; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 17:29 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add jump action type, which allows a matched flow to be redirected
to the specified group. This allows physical and logical flow
table/group hierarchies to be defined through rte_flow.
This breaks ABI compatibility for the following public functions (as it
modifies the ordering of the rte_flow_action_type enumeration):
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Add support for specification of new JUMP action to testpmd's flow
cli, and update the testpmd documentation to describe this new
action.
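A rough usage sketch of the new action (illustrative only; group 1 is an
arbitrary example):
/* Redirect matched packets to flow group 1 for further matching. */
struct rte_flow_action_jump jump = { .group = 1 };
struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
The equivalent testpmd flow command would be along the lines of:
flow create 0 ingress pattern eth / end actions jump group 1 / end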
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 23 +++++++++++
doc/guides/prog_guide/rte_flow.rst | 61 ++++++++++++++++++++++++-----
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++
lib/librte_ether/rte_flow.h | 41 +++++++++++++++----
4 files changed, 112 insertions(+), 17 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4239602b6..6e9fa5d7c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -183,6 +183,8 @@ enum index {
ACTION_END,
ACTION_VOID,
ACTION_PASSTHRU,
+ ACTION_JUMP,
+ ACTION_JUMP_GROUP,
ACTION_MARK,
ACTION_MARK_ID,
ACTION_FLAG,
@@ -738,6 +740,7 @@ static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
ACTION_PASSTHRU,
+ ACTION_JUMP,
ACTION_MARK,
ACTION_FLAG,
ACTION_QUEUE,
@@ -856,6 +859,12 @@ static const enum index action_of_push_mpls[] = {
ZERO,
};
+static const enum index action_jump[] = {
+ ACTION_JUMP_GROUP,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_init(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1931,6 +1940,20 @@ static const struct token token_list[] = {
.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
.call = parse_vc,
},
+ [ACTION_JUMP] = {
+ .name = "jump",
+ .help = "redirect traffic to a given group",
+ .priv = PRIV_ACTION(JUMP, sizeof(struct rte_flow_action_jump)),
+ .next = NEXT(action_jump),
+ .call = parse_vc,
+ },
+ [ACTION_JUMP_GROUP] = {
+ .name = "group",
+ .help = "group to redirect traffic to",
+ .next = NEXT(action_jump, NEXT_ENTRY(UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_jump, group)),
+ .call = parse_vc_conf,
+ },
[ACTION_MARK] = {
.name = "mark",
.help = "attach 32 bit value to packets",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e92969757..92e0f89ad 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -90,8 +90,12 @@ Thus predictable results for a given priority level can only be achieved
with non-overlapping rules, using perfect matching on all protocol layers.
Flow rules can also be grouped, the flow rule priority is specific to the
-group they belong to. All flow rules in a given group are thus processed
-either before or after another group.
+group they belong to. All flow rules in a given group are thus processed within
+the context of that group. Groups are not linked by default, so the logical
+hierarchy of groups must be explicitly defined by flow rules themselves in each
+group using the JUMP action to define the next group to redirect to. Only flow
+rules defined in the default group 0 are guaranteed to be matched against; this
+makes group 0 the origin of any group hierarchy defined by an application.
Support for multiple actions per rule may be implemented internally on top
of non-default hardware priorities, as a result both features may not be
@@ -138,29 +142,34 @@ Attributes
Attribute: Group
^^^^^^^^^^^^^^^^
-Flow rules can be grouped by assigning them a common group number. Lower
-values have higher priority. Group 0 has the highest priority.
+Flow rules can be grouped by assigning them a common group number. Groups
+allow a logical hierarchy of flow rule groups (tables) to be defined. These
+groups can be supported virtually in the PMD or in the physical device.
+Group 0 is the default group and is the only group against which flows are
+guaranteed to be matched; all subsequent groups can only be reached by
+way of the JUMP action from a matched flow rule.
Although optional, applications are encouraged to group similar rules as
much as possible to fully take advantage of hardware capabilities
(e.g. optimized matching) and work around limitations (e.g. a single pattern
-type possibly allowed in a given group).
+type possibly allowed in a given group), while being aware that the group
+hierarchies must be programmed explicitly.
Note that support for more than a single group is not guaranteed.
Attribute: Priority
^^^^^^^^^^^^^^^^^^^
-A priority level can be assigned to a flow rule. Like groups, lower values
+A priority level can be assigned to a flow rule; lower values
denote higher priority, with 0 as the maximum.
-A rule with priority 0 in group 8 is always matched after a rule with
-priority 8 in group 0.
-
-Group and priority levels are arbitrary and up to the application, they do
+Priority levels are arbitrary and up to the application, they do
not need to be contiguous nor start from 0, however the maximum number
varies between devices and may be affected by existing flow rules.
+A flow which matches multiple rules in the same group will always be matched by
+the rule with the highest priority in that group.
+
If a packet is matched by several rules of a given group for a given
priority level, the outcome is undefined. It can take any path, may be
duplicated or even cause unrecoverable errors.
@@ -1372,6 +1381,38 @@ flow rules:
| 2 | END |
+-------+----------------------------+
+Action: ``JUMP``
+^^^^^^^^^^^^^^^^
+
+Redirects packets to a group on the current device.
+
+In a hierarchy of groups, which can be used to represent physical or logical
+flow group/tables on the device, this action redirects the matched flow to
+the specified group on that device.
+
+If a matched flow is redirected to a table which doesn't contain a matching
+rule for that flow then the behavior is undefined and the result is up to the
+specific device. Best practice when using groups would be to define a default
+flow rule for each group which defines the default actions in that group so
+that a consistent behavior is defined.
+
+Defining an action for a matched flow in a group to jump to a group which is
+higher in the group hierarchy may not be supported by physical devices,
+depending on how groups are mapped to the physical devices. In the
+definitions of jump actions, applications should be aware that it may be
+possible to define flow rules which trigger an undefined behavior causing
+flows to loop between groups.
+
+.. _table_rte_flow_action_jump:
+
+.. table:: JUMP
+
+ +-----------+------------------------------+
+ | Field | Value |
+ +===========+==============================+
+ | ``group`` | Group to redirect packets to |
+ +-----------+------------------------------+
+
Action: ``MARK``
^^^^^^^^^^^^^^^^
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2edf96dd6..260d044d5 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3453,6 +3453,10 @@ This section lists supported actions and their attributes, if any.
- ``passthru``: let subsequent rule process matched packets.
+- ``jump``: redirect traffic to group on device.
+
+ - ``group {unsigned}``: group to redirect to.
+
- ``mark``: attach 32 bit value to packets.
- ``id {unsigned}``: 32 bit value to return with packets.
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 657cb9a99..17c1c4a89 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -35,18 +35,20 @@ extern "C" {
/**
* Flow rule attributes.
*
- * Priorities are set on two levels: per group and per rule within groups.
+ * Priorities are set on a per-rule basis within groups.
*
- * Lower values denote higher priority, the highest priority for both levels
- * is 0, so that a rule with priority 0 in group 8 is always matched after a
- * rule with priority 8 in group 0.
+ * Lower values denote higher priority; the highest priority for a flow rule
+ * is 0, so that for a flow that matches more than one rule, the rule with the
+ * lowest priority value will always be matched.
*
* Although optional, applications are encouraged to group similar rules as
* much as possible to fully take advantage of hardware capabilities
* (e.g. optimized matching) and work around limitations (e.g. a single
- * pattern type possibly allowed in a given group).
+ * pattern type possibly allowed in a given group). Applications should be
+ * aware that groups are not linked by default, and that they must be
+ * explicitly linked by the application using the JUMP action.
*
- * Group and priority levels are arbitrary and up to the application, they
+ * Priority levels are arbitrary and up to the application, they
* do not need to be contiguous nor start from 0, however the maximum number
* varies between devices and may be affected by existing flow rules.
*
@@ -69,7 +71,7 @@ extern "C" {
*/
struct rte_flow_attr {
uint32_t group; /**< Priority group. */
- uint32_t priority; /**< Priority level within group. */
+ uint32_t priority; /**< Rule priority level within group. */
uint32_t ingress:1; /**< Rule applies to ingress traffic. */
uint32_t egress:1; /**< Rule applies to egress traffic. */
/**
@@ -1236,6 +1238,15 @@ enum rte_flow_action_type {
*/
RTE_FLOW_ACTION_TYPE_PASSTHRU,
+ /**
+ * RTE_FLOW_ACTION_TYPE_JUMP
+ *
+ * Redirects packets to a group on the current device.
+ *
+ * See struct rte_flow_action_jump.
+ */
+ RTE_FLOW_ACTION_TYPE_JUMP,
+
/**
* Attaches an integer value to packets and sets PKT_RX_FDIR and
* PKT_RX_FDIR_ID mbuf flags.
@@ -1481,6 +1492,22 @@ struct rte_flow_action_mark {
uint32_t id; /**< Integer value to return with packets. */
};
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_JUMP
+ *
+ * Redirects packets to a group on the current device.
+ *
+ * In a hierarchy of groups, which can be used to represent physical or logical
+ * flow tables on the device, this action redirects the matched flow to
+ * a group on that device.
+ */
+struct rte_flow_action_jump {
+ uint32_t group;
+};
+
/**
* RTE_FLOW_ACTION_TYPE_QUEUE
*
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action Declan Doherty
@ 2018-04-26 17:29 ` Declan Doherty
2018-04-26 18:52 ` Thomas Monjalon
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
2018-04-26 20:58 ` [dpdk-dev] [PATCH v7 0/4] " Ferruh Yigit
4 siblings, 1 reply; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 17:29 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Introduces a new item type RTE_FLOW_ITEM_TYPE_MARK which enables
flow patterns to specify arbitrary integer values to match against
values set by the RTE_FLOW_ACTION_TYPE_MARK action in previously
matched flows.
Add support for specification of new MARK flow item in testpmd's cli.
Update testpmd documentation to describe new MARK flow item support.
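A rough usage sketch of the new item (illustrative only; the mark value 0x2a
and queue index 3 are arbitrary examples):
/* Match packets whose pipeline mark was set to 0x2a by an earlier
 * MARK action, then steer them to queue 3. */
struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
struct rte_flow_item_mark mark_spec = { .id = 0x2a };
struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_spec },
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_queue queue = { .index = 3 };
struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};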
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 22 +++++++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 30 +++++++++++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 ++++
lib/librte_ether/rte_flow.h | 29 ++++++++++++++++++++++++++++
4 files changed, 85 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6e9fa5d7c..1ac04a0ab 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -91,6 +91,8 @@ enum index {
ITEM_PHY_PORT_INDEX,
ITEM_PORT_ID,
ITEM_PORT_ID_ID,
+ ITEM_MARK,
+ ITEM_MARK_ID,
ITEM_RAW,
ITEM_RAW_RELATIVE,
ITEM_RAW_SEARCH,
@@ -494,6 +496,7 @@ static const enum index next_item[] = {
ITEM_VF,
ITEM_PHY_PORT,
ITEM_PORT_ID,
+ ITEM_MARK,
ITEM_RAW,
ITEM_ETH,
ITEM_VLAN,
@@ -555,6 +558,12 @@ static const enum index item_port_id[] = {
ZERO,
};
+static const enum index item_mark[] = {
+ ITEM_MARK_ID,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index item_raw[] = {
ITEM_RAW_RELATIVE,
ITEM_RAW_SEARCH,
@@ -1289,6 +1298,19 @@ static const struct token token_list[] = {
.next = NEXT(item_port_id, NEXT_ENTRY(UNSIGNED), item_param),
.args = ARGS(ARGS_ENTRY(struct rte_flow_item_port_id, id)),
},
+ [ITEM_MARK] = {
+ .name = "mark",
+ .help = "match traffic against value set in previously matched rule",
+ .priv = PRIV_ITEM(MARK, sizeof(struct rte_flow_item_mark)),
+ .next = NEXT(item_mark),
+ .call = parse_vc,
+ },
+ [ITEM_MARK_ID] = {
+ .name = "id",
+ .help = "Integer value to match against",
+ .next = NEXT(item_mark, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_mark, id)),
+ },
[ITEM_RAW] = {
.name = "raw",
.help = "match an arbitrary byte string",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 92e0f89ad..5b23778c7 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -656,6 +656,36 @@ representor" depending on the kind of underlying device).
| ``mask`` | ``id`` | zeroed to match any port ID |
+----------+----------+-----------------------------+
+Item: ``MARK``
+^^^^^^^^^^^^^^
+
+Matches an arbitrary integer value which was set using the ``MARK`` action in
+a previously matched rule.
+
+This item can only be specified once as a match criterion as the ``MARK`` action can
+only be specified once in a flow action.
+
+Note that the value of the MARK field is arbitrary and application defined.
+
+Depending on the underlying implementation the MARK item may be supported on
+the physical device, with virtual groups in the PMD or not at all.
+
+- Default ``mask`` matches any integer value.
+
+.. _table_rte_flow_item_mark:
+
+.. table:: MARK
+
+ +----------+----------+---------------------------+
+ | Field | Subfield | Value |
+ +==========+==========+===========================+
+ | ``spec`` | ``id`` | integer value |
+ +----------+----------+---------------------------+
+ | ``last`` | ``id`` | upper range value |
+ +----------+----------+---------------------------+
+ | ``mask`` | ``id`` | zeroed to match any value |
+ +----------+----------+---------------------------+
+
Data matching item types
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 260d044d5..013a40549 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3240,6 +3240,10 @@ This section lists supported pattern items and their attributes, if any.
- ``id {unsigned}``: DPDK port ID.
+- ``mark``: match value set in previously matched flow rule using the mark action.
+
+ - ``id {unsigned}``: arbitrary integer value.
+
- ``raw``: match an arbitrary byte string.
- ``relative {boolean}``: look for pattern after the previous item.
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 17c1c4a89..d390bbf5a 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -406,6 +406,13 @@ enum rte_flow_item_type {
* See struct rte_flow_item_icmp6_nd_opt_tla_eth.
*/
RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH,
+
+ /**
+ * Matches specified mark field.
+ *
+ * See struct rte_flow_item_mark.
+ */
+ RTE_FLOW_ITEM_TYPE_MARK,
};
/**
@@ -1148,6 +1155,28 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
};
#endif
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_MARK
+ *
+ * Matches an arbitrary integer value which was set using the ``MARK`` action
+ * in a previously matched rule.
+ *
+ * This item can only be specified once as a match criterion as the ``MARK``
+ * action can only be specified once in a flow action.
+ *
+ * This value is arbitrary and application-defined. Maximum allowed value
+ * depends on the underlying implementation.
+ *
+ * Depending on the underlying implementation the MARK item may be supported on
+ * the physical device, with virtual groups in the PMD or not at all.
+ */
+struct rte_flow_item_mark {
+ uint32_t id; /**< Integer value to match against. */
+};
+
/**
* Matching pattern item definition.
*
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
` (2 preceding siblings ...)
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
@ 2018-04-26 17:29 ` Declan Doherty
2018-04-26 18:55 ` Thomas Monjalon
2018-04-26 20:58 ` [dpdk-dev] [PATCH v7 0/4] " Ferruh Yigit
4 siblings, 1 reply; 23+ messages in thread
From: Declan Doherty @ 2018-04-26 17:29 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, Declan Doherty
Add rte_flow_action_count action data structure to enable shared
counters across multiple flows on a single port or across multiple
flows on multiple ports within the same switch domain. This also enables
multiple count actions to be specified in a single flow rule.
This patch also modifies the existing rte_flow_query API to take the
rte_flow_action structure as an input parameter instead of the
rte_flow_action_type enumeration to allow querying a specific action
from a flow rule when multiple actions of the same type are specified.
This patch also contains updates for the bonding, failsafe and mlx5 PMDs
and testpmd application which are affected by this API change.
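A rough usage sketch (illustrative only; counter id 5 is an arbitrary example,
port_id and flow are the port and rule handle the count action was created on):
/* A counter shared by all rules on the port that use id 5. */
struct rte_flow_action_count shared_count = { .shared = 1, .id = 5 };
struct rte_flow_action count_action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
        .conf = &shared_count,
};
/* The query now takes the action definition rather than just its type. */
struct rte_flow_query_count stats = { 0 };
struct rte_flow_error error;
if (rte_flow_query(port_id, flow, &count_action, &stats, &error) == 0 &&
    stats.hits_set)
        printf("hits: %llu\n", (unsigned long long)stats.hits);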
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
---
app/test-pmd/cmdline_flow.c | 6 ++--
app/test-pmd/config.c | 19 ++++++-----
app/test-pmd/testpmd.h | 2 +-
doc/guides/prog_guide/rte_flow.rst | 59 +++++++++++++++++++++------------
drivers/net/bonding/rte_eth_bond_flow.c | 9 ++---
drivers/net/failsafe/failsafe_flow.c | 4 +--
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_flow.c | 2 +-
lib/librte_ether/rte_flow.c | 2 +-
lib/librte_ether/rte_flow.h | 38 +++++++++++++++++++--
lib/librte_ether/rte_flow_driver.h | 2 +-
11 files changed, 97 insertions(+), 48 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1ac04a0ab..5754e7858 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -420,7 +420,7 @@ struct buffer {
} destroy; /**< Destroy arguments. */
struct {
uint32_t rule;
- enum rte_flow_action_type action;
+ struct rte_flow_action action;
} query; /**< Query arguments. */
struct {
uint32_t *group;
@@ -1101,7 +1101,7 @@ static const struct token token_list[] = {
.next = NEXT(NEXT_ENTRY(QUERY_ACTION),
NEXT_ENTRY(RULE_ID),
NEXT_ENTRY(PORT_ID)),
- .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action),
+ .args = ARGS(ARGS_ENTRY(struct buffer, args.query.action.type),
ARGS_ENTRY(struct buffer, args.query.rule),
ARGS_ENTRY(struct buffer, port)),
.call = parse_query,
@@ -3842,7 +3842,7 @@ cmd_flow_parsed(const struct buffer *in)
break;
case QUERY:
port_flow_query(in->port, in->args.query.rule,
- in->args.query.action);
+ &in->args.query.action);
break;
case LIST:
port_flow_list(in->port, in->args.list.group_n,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 0f2425229..c7dc1bfd0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1099,7 +1099,7 @@ static const struct {
MK_FLOW_ACTION(FLAG, 0),
MK_FLOW_ACTION(QUEUE, sizeof(struct rte_flow_action_queue)),
MK_FLOW_ACTION(DROP, 0),
- MK_FLOW_ACTION(COUNT, 0),
+ MK_FLOW_ACTION(COUNT, sizeof(struct rte_flow_action_count)),
MK_FLOW_ACTION(RSS, sizeof(struct rte_flow_action_rss)),
MK_FLOW_ACTION(PF, 0),
MK_FLOW_ACTION(VF, sizeof(struct rte_flow_action_vf)),
@@ -1452,7 +1452,7 @@ port_flow_flush(portid_t port_id)
/** Query a flow rule. */
int
port_flow_query(portid_t port_id, uint32_t rule,
- enum rte_flow_action_type action)
+ const struct rte_flow_action *action)
{
struct rte_flow_error error;
struct rte_port *port;
@@ -1473,16 +1473,17 @@ port_flow_query(portid_t port_id, uint32_t rule,
printf("Flow rule #%u not found\n", rule);
return -ENOENT;
}
- if ((unsigned int)action >= RTE_DIM(flow_action) ||
- !flow_action[action].name)
+ if ((unsigned int)action->type >= RTE_DIM(flow_action) ||
+ !flow_action[action->type].name)
name = "unknown";
else
- name = flow_action[action].name;
- switch (action) {
+ name = flow_action[action->type].name;
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
break;
default:
- printf("Cannot query action type %d (%s)\n", action, name);
+ printf("Cannot query action type %d (%s)\n",
+ action->type, name);
return -ENOTSUP;
}
/* Poisoning to make sure PMDs update it in case of error. */
@@ -1490,7 +1491,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
memset(&query, 0, sizeof(query));
if (rte_flow_query(port_id, pf->flow, action, &query, &error))
return port_flow_complain(&error);
- switch (action) {
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
printf("%s:\n"
" hits_set: %u\n"
@@ -1505,7 +1506,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
break;
default:
printf("Cannot display result for action type %d (%s)\n",
- action, name);
+ action->type, name);
break;
}
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index a33b525e2..1af87b8f4 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -620,7 +620,7 @@ int port_flow_create(portid_t port_id,
int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
int port_flow_flush(portid_t port_id);
int port_flow_query(portid_t port_id, uint32_t rule,
- enum rte_flow_action_type action);
+ const struct rte_flow_action *action);
void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
int port_flow_isolate(portid_t port_id, int set);
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5b23778c7..7af132ebf 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1277,17 +1277,19 @@ Actions are performed in list order:
.. table:: Mark, count then redirect
- +-------+--------+-----------+-------+
- | Index | Action | Field | Value |
- +=======+========+===========+=======+
- | 0 | MARK | ``mark`` | 0x2a |
- +-------+--------+-----------+-------+
- | 1 | COUNT |
- +-------+--------+-----------+-------+
- | 2 | QUEUE | ``queue`` | 10 |
- +-------+--------+-----------+-------+
- | 3 | END |
- +-------+----------------------------+
+ +-------+--------+------------+-------+
+ | Index | Action | Field | Value |
+ +=======+========+============+=======+
+ | 0 | MARK | ``mark`` | 0x2a |
+ +-------+--------+------------+-------+
+ | 1 | COUNT | ``shared`` | 0 |
+ | | +------------+-------+
+ | | | ``id`` | 0 |
+ +-------+--------+------------+-------+
+ | 2 | QUEUE | ``queue`` | 10 |
+ +-------+--------+------------+-------+
+ | 3 | END |
+ +-------+-----------------------------+
|
@@ -1516,23 +1518,36 @@ Drop packets.
Action: ``COUNT``
^^^^^^^^^^^^^^^^^
-Enables counters for this rule.
+Adds a counter action to a matched flow.
+
+If more than one count action is specified in a single flow rule, then each
+action must specify a unique id.
-These counters can be retrieved and reset through ``rte_flow_query()``, see
+Counters can be retrieved and reset through ``rte_flow_query()``, see
``struct rte_flow_query_count``.
-- Counters can be retrieved with ``rte_flow_query()``.
-- No configurable properties.
+The shared flag indicates whether the counter is unique to the flow rule the
+action is specified with, or whether it is a shared counter.
+
+For a count action with the shared flag set, a global device
+namespace is assumed for the counter id, so that any matched flow rules using
+a count action with the same counter id on the same port will contribute to
+that counter.
+
+For ports within the same switch domain, the counter id namespace extends
+to all ports within that switch domain.
.. _table_rte_flow_action_count:
.. table:: COUNT
- +---------------+
- | Field |
- +===============+
- | no properties |
- +---------------+
+ +------------+---------------------+
+ | Field | Value |
+ +============+=====================+
+ | ``shared`` | shared counter flag |
+ +------------+---------------------+
+ | ``id`` | counter id |
+ +------------+---------------------+
Query structure to retrieve and reset flow rule counters:
@@ -2282,7 +2297,7 @@ definition.
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error);
@@ -2290,7 +2305,7 @@ Arguments:
- ``port_id``: port identifier of Ethernet device.
- ``flow``: flow rule handle to query.
-- ``action``: action type to query.
+- ``action``: action to query, this must match the prototype from the flow rule.
- ``data``: pointer to storage for the associated query data type.
- ``error``: perform verbose error reporting if not NULL. PMDs initialize
this structure in case of error only.
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 8093c04f5..31e4bcaeb 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -152,6 +152,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
static int
bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
+ const struct rte_flow_action *action,
struct rte_flow_query_count *count,
struct rte_flow_error *err)
{
@@ -165,7 +166,7 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
rte_memcpy(&slave_count, count, sizeof(slave_count));
for (i = 0; i < internals->slave_count; i++) {
ret = rte_flow_query(internals->slaves[i].port_id,
- flow->flows[i], RTE_FLOW_ACTION_TYPE_COUNT,
+ flow->flows[i], action,
&slave_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
@@ -182,12 +183,12 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
static int
bond_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
- enum rte_flow_action_type type, void *arg,
+ const struct rte_flow_action *action, void *arg,
struct rte_flow_error *err)
{
- switch (type) {
+ switch (action->type) {
case RTE_FLOW_ACTION_TYPE_COUNT:
- return bond_flow_query_count(dev, flow, arg, err);
+ return bond_flow_query_count(dev, flow, action, arg, err);
default:
return rte_flow_error_set(err, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, arg,
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index a97f4075d..bfe42fcee 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -174,7 +174,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
static int
fs_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
- enum rte_flow_action_type type,
+ const struct rte_flow_action *action,
void *arg,
struct rte_flow_error *error)
{
@@ -185,7 +185,7 @@ fs_flow_query(struct rte_eth_dev *dev,
if (sdev != NULL) {
int ret = rte_flow_query(PORT_ID(sdev),
flow->flows[SUB_ID(sdev)],
- type, arg, error);
+ action, arg, error);
if ((ret = fs_err(sdev, ret))) {
fs_unlock(dev, 0);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 874978baa..c4d1d456b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -277,7 +277,7 @@ int mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
void mlx5_flow_list_flush(struct rte_eth_dev *dev, struct mlx5_flows *list);
int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error);
int mlx5_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
- enum rte_flow_action_type action, void *data,
+ const struct rte_flow_action *action, void *data,
struct rte_flow_error *error);
int mlx5_flow_isolate(struct rte_eth_dev *dev, int enable,
struct rte_flow_error *error);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8f5fcf4d6..c385f6ced 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3079,7 +3079,7 @@ mlx5_flow_query_count(struct ibv_counter_set *cs,
int
mlx5_flow_query(struct rte_eth_dev *dev __rte_unused,
struct rte_flow *flow,
- enum rte_flow_action_type action __rte_unused,
+ const struct rte_flow_action *action __rte_unused,
void *data,
struct rte_flow_error *error)
{
diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c
index 4f94ac9b5..7947529da 100644
--- a/lib/librte_ether/rte_flow.c
+++ b/lib/librte_ether/rte_flow.c
@@ -233,7 +233,7 @@ rte_flow_flush(uint16_t port_id,
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error)
{
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index d390bbf5a..f8ba71cdb 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1314,7 +1314,7 @@ enum rte_flow_action_type {
* These counters can be retrieved and reset through rte_flow_query(),
* see struct rte_flow_query_count.
*
- * No associated configuration structure.
+ * See struct rte_flow_action_count.
*/
RTE_FLOW_ACTION_TYPE_COUNT,
@@ -1546,6 +1546,38 @@ struct rte_flow_action_queue {
uint16_t index; /**< Queue index to use. */
};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * Adds a counter action to a matched flow.
+ *
+ * If more than one count action is specified in a single flow rule, then each
+ * action must specify a unique id.
+ *
+ * Counters can be retrieved and reset through ``rte_flow_query()``, see
+ * ``struct rte_flow_query_count``.
+ *
+ * The shared flag indicates whether the counter is unique to the flow rule the
+ * action is specified with, or whether it is a shared counter.
+ *
+ * For a count action with the shared flag set, a global device
+ * namespace is assumed for the counter id, so that any matched flow rules using
+ * a count action with the same counter id on the same port will contribute to
+ * that counter.
+ *
+ * For ports within the same switch domain, the counter id namespace extends
+ * to all ports within that switch domain.
+ */
+struct rte_flow_action_count {
+ uint32_t shared:1; /**< Share counter ID with other flow rules. */
+ uint32_t reserved:31; /**< Reserved, must be zero. */
+ uint32_t id; /**< Counter ID. */
+};
+
/**
* RTE_FLOW_ACTION_TYPE_COUNT (query)
*
@@ -2044,7 +2076,7 @@ rte_flow_flush(uint16_t port_id,
* @param flow
* Flow rule handle to query.
* @param action
- * Action type to query.
+ * Action definition as defined in original flow rule.
* @param[in, out] data
* Pointer to storage for the associated query data type.
* @param[out] error
@@ -2057,7 +2089,7 @@ rte_flow_flush(uint16_t port_id,
int
rte_flow_query(uint16_t port_id,
struct rte_flow *flow,
- enum rte_flow_action_type action,
+ const struct rte_flow_action *action,
void *data,
struct rte_flow_error *error);
diff --git a/lib/librte_ether/rte_flow_driver.h b/lib/librte_ether/rte_flow_driver.h
index 3800310ba..1c90c600d 100644
--- a/lib/librte_ether/rte_flow_driver.h
+++ b/lib/librte_ether/rte_flow_driver.h
@@ -88,7 +88,7 @@ struct rte_flow_ops {
int (*query)
(struct rte_eth_dev *,
struct rte_flow *,
- enum rte_flow_action_type,
+ const struct rte_flow_action *,
void *,
struct rte_flow_error *);
/** See rte_flow_isolate(). */
--
2.14.3
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
@ 2018-04-26 18:52 ` Thomas Monjalon
2018-04-27 12:24 ` Ferruh Yigit
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Monjalon @ 2018-04-26 18:52 UTC (permalink / raw)
To: Declan Doherty; +Cc: dev, Ferruh Yigit
26/04/2018 19:29, Declan Doherty:
> Introduces a new action type RTE_FLOW_ITEM_TYPE_MARK which enables
> flow patterns to specify arbitrary integer values to match aginst
> set by the RTE_FLOW_ACTION_TYPE_MARK action in previously matched
> flows.
In the title, I don't think you need specify rte_flow_item_types
(which doesn't exist BTW).
I suggest:
ethdev: add mark flow item
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action Declan Doherty
@ 2018-04-26 18:54 ` Thomas Monjalon
2018-04-27 12:24 ` Ferruh Yigit
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Monjalon @ 2018-04-26 18:54 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Declan Doherty
About title, we use uppercase only for acronyms.
So it should be:
ethdev: add group jump action
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
@ 2018-04-26 18:55 ` Thomas Monjalon
2018-04-27 12:25 ` Ferruh Yigit
0 siblings, 1 reply; 23+ messages in thread
From: Thomas Monjalon @ 2018-04-26 18:55 UTC (permalink / raw)
To: Declan Doherty; +Cc: dev, Ferruh Yigit
26/04/2018 19:29, Declan Doherty:
> Add rte_flow_action_count action data structure to enable shared
> counters across multiple flows on a single port or across multiple
> flows on multiple ports within the same switch domain. Also this enables
> multiple count actions to be specified in a single flow action.
>
> This patch also modifies the existing rte_flow_query API to take the
> rte_flow_action structure as an input parameter instead of the
> rte_flow_action_type enumeration to allow querying a specific action
> from a flow rule when multiple actions of the same type are specified.
>
> This patch also contains updates for the bonding, failsafe and mlx5 PMDs
> and testpmd application which are affected by this API change.
The API changes must be notified in the release notes (there is a section
for API changes).
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/4] ethdev: add shared counter support to rte_flow
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
` (3 preceding siblings ...)
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
@ 2018-04-26 20:58 ` Ferruh Yigit
4 siblings, 0 replies; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-26 20:58 UTC (permalink / raw)
To: Declan Doherty, dev
On 4/26/2018 6:29 PM, Declan Doherty wrote:
> Declan Doherty (4):
> ethdev: Add tunnel encap/decap actions
> ethdev: Add group JUMP action
> ethdev: add mark flow item to rte_flow_item_types
> ethdev: add shared counter support to rte_flow
Series applied to dpdk-next-net/master, thanks.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types
2018-04-26 18:52 ` Thomas Monjalon
@ 2018-04-27 12:24 ` Ferruh Yigit
0 siblings, 0 replies; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-27 12:24 UTC (permalink / raw)
To: Thomas Monjalon, Declan Doherty; +Cc: dev
On 4/26/2018 7:52 PM, Thomas Monjalon wrote:
> 26/04/2018 19:29, Declan Doherty:
>> Introduces a new action type RTE_FLOW_ITEM_TYPE_MARK which enables
>> flow patterns to specify arbitrary integer values to match aginst
>> set by the RTE_FLOW_ACTION_TYPE_MARK action in previously matched
>> flows.
>
> In the title, I don't think you need specify rte_flow_item_types
> (which doesn't exist BTW).
> I suggest:
> ethdev: add mark flow item
Updated on next-net.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action
2018-04-26 18:54 ` Thomas Monjalon
@ 2018-04-27 12:24 ` Ferruh Yigit
0 siblings, 0 replies; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-27 12:24 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Declan Doherty
On 4/26/2018 7:54 PM, Thomas Monjalon wrote:
> About title, we use uppercase only for acronyms.
> So it should be:
> ethdev: add group jump action
Updated on next-net.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow
2018-04-26 18:55 ` Thomas Monjalon
@ 2018-04-27 12:25 ` Ferruh Yigit
2018-04-27 14:52 ` Ferruh Yigit
0 siblings, 1 reply; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-27 12:25 UTC (permalink / raw)
To: Thomas Monjalon, Declan Doherty; +Cc: dev
On 4/26/2018 7:55 PM, Thomas Monjalon wrote:
> 26/04/2018 19:29, Declan Doherty:
>> Add rte_flow_action_count action data structure to enable shared
>> counters across multiple flows on a single port or across multiple
>> flows on multiple ports within the same switch domain. Also this enables
>> multiple count actions to be specified in a single flow action.
>>
>> This patch also modifies the existing rte_flow_query API to take the
>> rte_flow_action structure as an input parameter instead of the
>> rte_flow_action_type enumeration to allow querying a specific action
>> from a flow rule when multiple actions of the same type are specified.
>>
>> This patch also contains updates for the bonding, failsafe and mlx5 PMDs
>> and testpmd application which are affected by this API change.
>
> The API changes must be notified in the release notes (there is a section
> for API changes).
Hi Declan,
If you can send the update as a separate patch, I can squash it in next-net.
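As a rough sketch of the shared counter usage described in the commit message
above (the counter id, the second action, and the surrounding rule are
placeholders, not part of the patch):

#include <rte_flow.h>

/* one counter shared by several flow rules on the same port or on
 * ports within the same switch domain */
struct rte_flow_action_count shared_count = {
	.shared = 1,
	.id = 5,	/* placeholder counter id */
};

struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &shared_count },
	{ .type = RTE_FLOW_ACTION_TYPE_DROP },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};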
* Re: [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow
2018-04-27 12:25 ` Ferruh Yigit
@ 2018-04-27 14:52 ` Ferruh Yigit
0 siblings, 0 replies; 23+ messages in thread
From: Ferruh Yigit @ 2018-04-27 14:52 UTC (permalink / raw)
To: Thomas Monjalon, Declan Doherty; +Cc: dev
On 4/27/2018 1:25 PM, Ferruh Yigit wrote:
> On 4/26/2018 7:55 PM, Thomas Monjalon wrote:
>> 26/04/2018 19:29, Declan Doherty:
>>> Add rte_flow_action_count action data structure to enable shared
>>> counters across multiple flows on a single port or across multiple
>>> flows on multiple ports within the same switch domain. This also enables
>>> multiple count actions to be specified in a single flow rule.
>>>
>>> This patch also modifies the existing rte_flow_query API to take the
>>> rte_flow_action structure as an input parameter instead of the
>>> rte_flow_action_type enumeration to allow querying a specific action
>>> from a flow rule when multiple actions of the same type are specified.
>>>
>>> This patch also contains updates for the bonding, failsafe and mlx5 PMDs
>>> and testpmd application which are affected by this API change.
>>
>> The API changes must be notified in the release notes (there is a section
>> for API changes).
>
> Hi Declan,
>
> If you can send the update as a separate patch, I can squash it in next-net.
Thanks Declan, the following addition has been squashed into the related commit:
* ethdev: changed flow APIs regarding the count action:
  * ``rte_flow_create()`` count action now requires the ``struct rte_flow_action_count`` configuration.
  * ``rte_flow_query()`` parameter changed from action type to action structure.
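A minimal sketch of what a query against a specific count action could look
like under the changed API (the flow handle and the action pointer are assumed
to come from the rule's creation; error handling is reduced to a return check):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <rte_flow.h>

/* query one count action of an existing flow rule and print its stats */
static void
query_count(uint16_t port_id, struct rte_flow *flow,
	    const struct rte_flow_action *count_action)
{
	struct rte_flow_query_count stats = { .reset = 0 };
	struct rte_flow_error error;

	/* the third parameter is now the full action, not just its type */
	if (rte_flow_query(port_id, flow, count_action, &stats, &error) == 0)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       stats.hits, stats.bytes);
}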
* Re: [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
` (4 preceding siblings ...)
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
@ 2018-04-27 20:18 ` Michael Wildt
5 siblings, 0 replies; 23+ messages in thread
From: Michael Wildt @ 2018-04-27 20:18 UTC (permalink / raw)
To: Declan Doherty; +Cc: dev, Ferruh Yigit, Ajit Kumar Khaparde
Hi Declan,
Thank you (and DPDK) for driving this proposal, much appreciated. Upon a
quick review of the proposal, one thing stood out.
In rte_flow.h the new tunnel actions,
RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP/DECAP (and similarly NVGRE_ENCAP/DECAP),
are defined as:
/**
* Decapsulate outer most VXLAN tunnel from matched flow.
*
* If the flow pattern does not define a valid VXLAN tunnel (as specified by
* RFC7348) then the PMD should return a RTE_FLOW_ERROR_TYPE_ACTION
* error.
*/
RTE_FLOW_ACTION_TYPE_VXLAN_DECAP,
and
* RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
*
* VXLAN tunnel end-point encapsulation data definition
*
* The tunnel definition is provided through the flow item pattern; the
* provided pattern must conform to RFC7348 for the tunnel specified. The flow
* definition must be provided in order from the RTE_FLOW_ITEM_TYPE_ETH
* definition up to the end item, which is specified by RTE_FLOW_ITEM_TYPE_END.
*
* The mask field allows the user to specify which fields in the flow item
* definitions can be ignored and which hold valid data to be used verbatim.
*
* Note: the last field is not used in the definition of a tunnel and can be
* ignored.
*
* Valid flow definition for RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP include:
*
* - ETH / IPV4 / UDP / VXLAN / END
* - ETH / IPV6 / UDP / VXLAN / END
* - ETH / VLAN / IPV4 / UDP / VXLAN / END
*
*/
The same applies to doc/guides/prog_guide/rte_flow.rst.
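For context, a rough sketch of one of the RFC7348-compliant definitions listed
above filled into the new action (the outer header values are placeholders; a
real application would populate the addresses, UDP port 4789 and the VNI):

#include <rte_flow.h>

struct rte_flow_item_eth enc_eth;	/* outer MACs / ethertype */
struct rte_flow_item_ipv4 enc_ipv4;	/* outer tunnel endpoints */
struct rte_flow_item_udp enc_udp;	/* e.g. dst port 4789 for VXLAN */
struct rte_flow_item_vxlan enc_vxlan;	/* 24-bit VNI */

struct rte_flow_item encap_definition[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &enc_eth },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &enc_ipv4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &enc_udp },
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &enc_vxlan },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

struct rte_flow_action_vxlan_encap encap_conf = {
	.definition = encap_definition,
};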
With these specific RFC7348 references it wouldn't be possible to
support customer-private implementations of VXLAN, i.e. a
non-compliant inner packet such as eth/ipv4/udp/vxlan with no inner
eth header before the ipv4. The proposed format, though, by specifying
the protocol layers using the 'item' list, does allow the complete
stack to be listed, including the inner packet format. This could be
used to trigger the PMD to handle such a flow differently from
RFC-compliant VXLAN.
If the RFC reference is left in, then one would have to create yet
another VXLAN action, e.g.
RTE_FLOW_ACTION_TYPE_VXLAN_CUSTOM_ENCAP/DECAP or
RTE_FLOW_ACTION_TYPE_NONCOMPLIANT_VXLAN_ENCAP/DECAP, which I find
inappropriate even if its name were to describe that the inner packet
is missing the Ethernet header.
Would it be possible to remove the RFC7348 compliance requirement from
these two comments, so as to allow for more flexibility and so that we
do not, potentially, drown in custom cases? An example of the kind of
definition this would permit is sketched below.
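A sketch only, reusing the action's item-list format for a hypothetical tunnel
whose inner packet starts directly at IPv4 (exactly the kind of definition the
current comments rule out; not something the series itself defines):

#include <rte_flow.h>

/* hypothetical non-RFC7348 tunnel: no inner Ethernet header */
struct rte_flow_item custom_definition[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* outer */
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP },
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },	/* inner packet, no ETH */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};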
Thanks,
Michael
On Thu, Apr 26, 2018 at 8:08 AM, Declan Doherty
<declan.doherty@intel.com> wrote:
> This patchset contains the revised proposal to manage
> tunnel endpoint hardware acceleration based on community
> feedback on the RFC
> (http://dpdk.org/ml/archives/dev/2017-December/084676.html). This
> proposal is enabled purely through rte_flow APIs with the
> addition of some new features which were previously implemented
> by the rte_tep APIs proposed in the original
> RFC. This patchset ultimately aims to enable the configuration
> of inline data path encapsulation and decapsulation of tunnel
> endpoint network overlays on accelerated IO devices.
>
> V2:
> - Split new functions into separate patches, and add additional
> documentation.
>
> V3:
>
> - Extended the description of group counter in documentation.
> - Renamed VTEP to TUNNEL.
> - Fixed C99 syntax.
>
> V4:
>
> - Modify encap/decap actions to be protocol specific
> - rename group action type to jump
> - add mark flow item type in place of metadata flow/action types
> - add count action data structure
> - modify query API to accept rte_flow_action structure in place of
> rte_flow_action_type enumeration to support specification and
> querying of multiple actions of the same type
>
> V5:
> - Documentation and comment updates
> - Mark new API structures as experimental
> - Squash new function testpmd enablement into relevant patches.
>
> V6:
> - rebased to head of next-net
> - fixed whitespace issues added in the previous revision
>
> The summary of the additions to rte_flow is as follows:
>
> - Add new flow actions RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP and
> RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP to rte_flow to support
> specification of encapsulation and decapsulation of VXLAN and NVGRE
> tunnels in hardware.
> - Introduces support for the use of pipeline metadata in
> the flow pattern definition and the population of metadata fields
> from flow actions using the MARK flow and action items.
> - Add shared flag to counters to enable statistics to be kept on
> groups of flows such as all ingress/egress flows of a tunnel
> - Add a jump action to allow flows to be redirected to a group
> within the device.
>
> A high level summary of the proposed usage model is as follows:
>
> 1. Decapsulation
>
> 1.1. Decapsulation of tunnel outer headers, forwarding all traffic
> to the same queue(s) or port, would have the following flow
> parameters (pseudo-code is used here).
>
> struct rte_flow_attr attr = { .ingress = 1 };
>
> struct rte_flow_item pattern[] = {
> { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = ð_item },
> { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
> { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
> { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
> { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> struct rte_flow_action actions[] = {
> { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
> { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf_action },
> { .type = RTE_FLOW_ACTION_TYPE_END }
> }
>
> 1.2.
>
> Decapsulation of tunnel outer headers and matching on inner
> headers, and forwarding to the same queue/s or port.
>
> 1.2.1.
>
> The same scenario as above, but either the application
> or the hardware requires configuration as 2 logically independent
> operations (viewing it as 2 logical tables). The first stage
> is the flow rule defining the pattern to match the tunnel
> and the action to decapsulate the packet, and the second
> stage table matches the inner header and defines the actions,
> forward to port etc.
>
> flow rule for the outer header in group 0
>
> struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
>
> struct rte_flow_item pattern[] = {
> { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = ð_item },
> { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
> { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
> { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
> { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> struct rte_flow_action_count shared_counter = {
> .shared = 1,
> .id = counter_id
> };
>
> struct rte_flow_action actions[] = {
> { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &shared_counter },
> { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_action },
> { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
> {
> .type = RTE_FLOW_ACTION_TYPE_JUMP,
> .conf = &(struct rte_flow_action_jump){ .group = 1 }
> },
> { .type = RTE_FLOW_ACTION_TYPE_END }
> }
>
> flow rule for the inner header in group 1
>
> struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
>
> struct rte_flow_item_mark mark_item = { .id = mark_id };
>
> struct rte_flow_item pattern[] = {
> { .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_item },
> { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = ð_item },
> { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
> { .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
> { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> struct rte_flow_action actions[] = {
> {
> .type = RTE_FLOW_ACTION_TYPE_PORT_ID,
> .conf = &(struct rte_flow_action_port_id){ .id = port_id }
> },
> { .type = RTE_FLOW_ACTION_TYPE_END }
> }
>
> Note that the mark action in the flow rule in group 0 generates
> the value in the pipeline which is then used as part of the flow
> pattern in group 1 to specify the exact flow to match against. In the
> case where exact match rules are provided explicitly by the application,
> the MARK item value can also be supplied by the application in the flow
> pattern of the flow rule in group 1.
>
> 2. Encapsulation
>
> Encapsulation of all traffic matching a specific flow pattern to a
> specified tunnel and egressing to a particular port.
>
> struct rte_flow_attr attr = { .egress = 1 };
>
> struct rte_flow_item pattern[] = {
> { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = ð_item },
> { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
> { .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
> { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> struct rte_flow_action_tunnel_encap vxlan_encap_action = {
> .definition = {
> { .type=eth, .spec={}, .mask={} },
> { .type=ipv4, .spec={}, .mask={} },
> { .type=udp, .spec={}, .mask={} },
> { .type=vxlan, .spec={}, .mask={} },
> { .type=end }
> }
> };
>
> struct rte_flow_action actions[] = {
> { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
> { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &vxlan_encap_action },
> {
> .type = RTE_FLOW_ACTION_TYPE_PORT_ID,
> .conf = &(struct rte_flow_action_port_id){ .id = port_id }
> },
> { .type = RTE_FLOW_ACTION_TYPE_END }
> };
>
> Declan Doherty (4):
> ethdev: Add tunnel encap/decap actions
> ethdev: Add group JUMP action
> ethdev: add mark flow item to rte_flow_item_types
> ethdev: add shared counter support to rte_flow
>
> app/test-pmd/cmdline_flow.c | 51 +++++-
> app/test-pmd/config.c | 15 +-
> app/test-pmd/testpmd.h | 2 +-
> doc/guides/prog_guide/rte_flow.rst | 257 ++++++++++++++++++++++++----
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 8 +
> drivers/net/bonding/rte_eth_bond_flow.c | 9 +-
> drivers/net/failsafe/failsafe_flow.c | 4 +-
> lib/librte_ether/rte_flow.c | 2 +-
> lib/librte_ether/rte_flow.h | 211 +++++++++++++++++++++--
> lib/librte_ether/rte_flow_driver.h | 2 +-
> 10 files changed, 500 insertions(+), 61 deletions(-)
>
> --
> 2.14.3
>
Thread overview: 23+ messages in thread
2018-04-26 12:08 [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 2/4] ethdev: Add group JUMP action Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
2018-04-26 12:08 ` [dpdk-dev] [PATCH v6 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
2018-04-26 14:06 ` Ori Kam
2018-04-26 14:27 ` Ferruh Yigit
2018-04-26 14:43 ` Ori Kam
2018-04-26 14:48 ` Doherty, Declan
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 0/4] " Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 1/4] ethdev: Add tunnel encap/decap actions Declan Doherty
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 2/4] ethdev: Add group JUMP action Declan Doherty
2018-04-26 18:54 ` Thomas Monjalon
2018-04-27 12:24 ` Ferruh Yigit
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 3/4] ethdev: add mark flow item to rte_flow_item_types Declan Doherty
2018-04-26 18:52 ` Thomas Monjalon
2018-04-27 12:24 ` Ferruh Yigit
2018-04-26 17:29 ` [dpdk-dev] [PATCH v7 4/4] ethdev: add shared counter support to rte_flow Declan Doherty
2018-04-26 18:55 ` Thomas Monjalon
2018-04-27 12:25 ` Ferruh Yigit
2018-04-27 14:52 ` Ferruh Yigit
2018-04-26 20:58 ` [dpdk-dev] [PATCH v7 0/4] " Ferruh Yigit
2018-04-27 20:18 ` [dpdk-dev] [PATCH v6 0/4] additions to support tunnel encap/decap Michael Wildt