* [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item
@ 2021-03-18 7:30 Bing Zhao
2021-03-22 15:16 ` Andrew Rybchenko
` (2 more replies)
0 siblings, 3 replies; 45+ messages in thread
From: Bing Zhao @ 2021-03-18 7:30 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko; +Cc: dev
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For some stateful offloading,
like a TCP connection, the HW module helps provide the ability of
full offloading without SW participation after the connection is
established.
The basic usage is that the application adds the conntrack action
in the first flow, and in the following flow(s) uses the conntrack
item to match on the result.
A TCP connection carries traffic in two directions. To set up a
conntrack action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one port, supplying the
peer port as a parameter to the action. After the context is created,
it can only be used between those ports (dual-port mode) or on a
single port. In dual-port mode, the application should modify the
action via the action_ctx_update interface before each use, in order
to set the correct direction for the following rte flow.
Query will be supported via the action_ctx_query interface,
reporting the current packet information and connection status.
For packets received during the conntrack setup, it is suggested to
re-inject them in order to take full advantage of the conntrack.
Only valid packets should pass the conntrack; packets with invalid
TCP information (like out-of-window sequences) or an invalid header
(like a malformed one) should not pass.
Testpmd command line example:
set conntrack [index] enable is 1 last_seq is xxx last_ack is xxx /
... / orig_dir win_scale is xxx sent_end is xxx max_win is xxx ... /
rply_dir ... / end
flow action_ctx [CTX] create ingress ... / conntrack is [index] / end
flow create 0 group X ingress pattern ... / tcp / end actions action_ctx [CTX]
/ jump group Y / end
flow create 0 group Y ingress pattern ... / ct is [Valid] / end actions
queue index [hairpin queue] / end
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
lib/librte_ethdev/rte_flow.h | 191 +++++++++++++++++++++++++++++++++++
1 file changed, 191 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 669e677e91..b2e4f0751a 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -550,6 +550,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * See struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1654,6 +1663,49 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is valid after passing the connection tracking examination.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID (1 << 0)
+/**
+ * The state of the connection was changed.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
+/**
+ * Error state was detected on this packet for this connection.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_ERROR (1 << 2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_DISABLED (1 << 3)
+/**
+ * The packet contains some bad field(s).
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT (1 << 4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
+ * bit or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2236,6 +2288,17 @@ enum rte_flow_action_type {
* See struct rte_flow_action_modify_field.
*/
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * Send packet to HW connection tracking module for examination.
+ *
+ * See struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2828,6 +2891,134 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_shared_action;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /**< SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /**< 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /**< First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /**< First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /**< After second FIN, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+ /**< Second FIN was ACKed, connection was closed. */
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_index {
+ RTE_FLOW_CONNTRACK_INDEX_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_INDEX_SYN = (1 << 0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_INDEX_SYN_ACK = (1 << 1), /**< With SYN+ACK flag. */
+ RTE_FLOW_CONNTRACK_INDEX_FIN = (1 << 2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_INDEX_ACK = (1 << 3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_INDEX_RST = (1 << 4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ */
+struct rte_flow_tcp_dir_param {
+ uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
+ uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
+ uint32_t last_ack_seen:1;
+ /**< An ACK packet has been received by this side. */
+ uint32_t data_unacked:1;
+ /**< If set, indicates that there is unacked data of the connection. */
+ uint32_t sent_end;
+ /**< Maximal value of sequence + payload length over sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t reply_end;
+ /**< Maximal value of (ACK + window size) over received packet + length
+ * over sent packet (maximal sequence could be sent).
+ */
+ uint32_t max_win;
+ /**< Maximal value of actual window size over sent packets. */
+ uint32_t max_ack;
+ /**< Maximal value of ACK over sent packets. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ */
+struct rte_flow_action_conntrack {
+ uint16_t peer_port; /**< The peer port number, can be the same port. */
+ uint32_t is_original_dir:1;
+ /**< Direction of this connection when creating a flow; the value only
+ * affects the subsequent flow creation.
+ */
+ uint32_t enable:1;
+ /**< Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
+ * In this state the HW will act as passthrough.
+ */
+ uint32_t live_connection:1;
+ /**< At least one ack was seen, after the connection was established. */
+ uint32_t selective_ack:1;
+ /**< Enable selective ACK on this connection. */
+ uint32_t challenge_ack_passed:1;
+ /**< A challenge ack has passed. */
+ uint32_t last_direction:1;
+ /**< 1: The last packet is seen that comes from the original direction.
+ * 0: From the reply direction.
+ */
+ uint32_t liberal_mode:1;
+ /**< No TCP check will be done except the state change. */
+ enum rte_flow_conntrack_state state;
+ /**< The current state of the connection. */
+ uint8_t max_ack_window;
+ /**< Scaling factor for maximal allowed ACK window. */
+ uint8_t retransmission_limit;
+ /**< Maximal allowed number of retransmission times. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /**< TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /**< TCP parameters of the reply direction. */
+ uint16_t last_window;
+ /**< The window value of the last packet passed this conntrack. */
+ enum rte_flow_conntrack_index last_index;
+ uint32_t last_seq;
+ /**< The sequence of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /**< The acknowledgement of the last packet passed this conntrack. */
+ uint32_t last_end;
+ /**< The total value ACK + payload length of the last packet passed
+ * this conntrack.
+ */
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * If a port cannot support updating a context, the only valid solution
+ * is to destroy the old context and create a new one instead.
+ */
+struct rte_flow_modify_conntrack {
+ struct rte_flow_action_conntrack new_ct;
+ /**< New connection tracking parameters to be updated. */
+ uint32_t direction:1; /**< The direction field will be updated. */
+ uint32_t state:1;
+ /**< All the other fields except direction will be updated. */
+ uint32_t reserved:30; /**< Reserved bits for the future usage. */
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item
2021-03-18 7:30 [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-03-22 15:16 ` Andrew Rybchenko
2021-04-07 7:43 ` Bing Zhao
2021-03-23 23:27 ` Ajit Khaparde
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
2 siblings, 1 reply; 45+ messages in thread
From: Andrew Rybchenko @ 2021-03-22 15:16 UTC (permalink / raw)
To: Bing Zhao, orika, thomas, ferruh.yigit; +Cc: dev
On 3/18/21 10:30 AM, Bing Zhao wrote:
> This commit introduced the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow the application should add
> the conntrack action and in the following flow(s) the application
> should use the conntrack item to match on the result.
>
> A TCP connection has two directions traffic. To set a conntrack
> action context correctly, information from packets of both directions
> are required.
>
> The conntrack action should be created on one port and supply the
> peer port as a parameter to the action. After context creating, it
> could only be used between the ports (dual-port mode) or a single
> port. The application should modify the action via action_ctx_update
> interface before each use in dual-port mode, in order to set the
> correct direction for the following rte flow.
Sorry, but "update interface before each use" sounds frightening. Maybe
I simply don't understand all the reasons behind it.
> Query will be supported via action_ctx_query interface, about the
> current packets information and connection status.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to take full advantage of the
> conntrack. Only the valid packets should pass the conntrack, packets
> with invalid TCP information, like out of window, or with invalid
> header, like malformed, should not pass.
>
> Testpmd command line example:
>
> set conntrack [index] enable is 1 last_seq is xxx last ack is xxx /
> ... / orig_dir win_scale is xxx sent_end is xxx max_win is xxx ... /
> rply_dir ... / end
> flow action_ctx [CTX] create ingress ... / conntrack is [index] / end
> flow create 0 group X ingress patterns ... / tcp / end actions action_ctx [CTX]
> / jump group Y / end
> flow create 0 group Y ingress patterns ... / ct is [Valid] / end actions
> queue index [hairpin queue] / end
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> lib/librte_ethdev/rte_flow.h | 191 +++++++++++++++++++++++++++++++++++
> 1 file changed, 191 insertions(+)
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 669e677e91..b2e4f0751a 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -550,6 +550,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches conntrack state.
> + *
> + * See struct rte_flow_item_conntrack.
> + */
> + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> };
>
> /**
> @@ -1654,6 +1663,49 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +/**
> + * The packet is with valid.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID (1 << 0)
It sounds like the conntrack state is valid, rather than the packet
being valid from the conntrack point of view. Maybe:
RTE_FLOW_CONNTRACK_FLAG_PKT_VALID? Or _VALID_PKT to
go with _BAD_PKT.
> +/**
> + * The state of the connection was changed.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
> +/**
> + * Error state was detected on this packet for this connection.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_ERROR (1 << 2)
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_DISABLED (1 << 3)
> +/**
> + * The packet contains some bad field(s).
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT (1 << 4)
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> + *
> + * Matches the state of a packet after it passed the connection tracking
> + * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
> + * or a reasonable combination of these bits.
> + */
> +struct rte_flow_item_conntrack {
> + uint32_t flags;
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
> + .flags = 0xffffffff,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> @@ -2236,6 +2288,17 @@ enum rte_flow_action_type {
> * See struct rte_flow_action_modify_field.
> */
> RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
> +
> + /**
> + * [META]
> + *
> + * Enable tracking a TCP connection state.
> + *
> + * Send packet to HW connection tracking module for examination.
> + *
> + * See struct rte_flow_action_conntrack.
> + */
> + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> };
>
> /**
> @@ -2828,6 +2891,134 @@ struct rte_flow_action_set_dscp {
> */
> struct rte_flow_shared_action;
>
> +/**
> + * The state of a TCP connection.
> + */
> +enum rte_flow_conntrack_state {
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< SYN-ACK packet was seen. */
May I suggest putting comments before each enum member? IMHO it is
more readable. A comment after makes sense if it is on the same
line; otherwise, it is better to use comments before the code.
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< 3-way handshark was done. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< After second FIN, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> + /**< Second FIN was ACKed, connection was closed. */
> +};
> +
> +/**
> + * The last passed TCP packet flags of a connection.
> + */
> +enum rte_flow_conntrack_index {
Sorry, I don't understand why it is named conntrack_index.
> + RTE_FLOW_CONNTRACK_INDEX_NONE = 0, /**< No Flag. */
> + RTE_FLOW_CONNTRACK_INDEX_SYN = (1 << 0), /**< With SYN flag. */
> + RTE_FLOW_CONNTRACK_INDEX_SYN_ACK = (1 << 1), /**< With SYN+ACK flag. */
> + RTE_FLOW_CONNTRACK_INDEX_FIN = (1 << 2), /**< With FIN flag. */
> + RTE_FLOW_CONNTRACK_INDEX_ACK = (1 << 3), /**< With ACK flag. */
> + RTE_FLOW_CONNTRACK_INDEX_RST = (1 << 4), /**< With RST flag. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * Configuration parameters for each direction of a TCP connection.
> + */
> +struct rte_flow_tcp_dir_param {
> + uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
> + uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
> + uint32_t last_ack_seen:1;
> + /**< An ACK packet has been received by this side. */
Same here about comments after fields.
> + uint32_t data_unacked:1;
> + /**< If set, indicates that there is unacked data of the connection. */
> + uint32_t sent_end;
> + /**< Maximal value of sequence + payload length over sent
> + * packets (next ACK from the opposite direction).
> + */
> + uint32_t reply_end;
> + /**< Maximal value of (ACK + window size) over received packet + length
> + * over sent packet (maximal sequence could be sent).
> + */
> + uint32_t max_win;
> + /**< Maximal value of actual window size over sent packets. */
> + uint32_t max_ack;
> + /**< Maximal value of ACK over sent packets. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Configuration and initial state for the connection tracking module.
> + * This structure could be used for both setting and query.
> + */
> +struct rte_flow_action_conntrack {
> + uint16_t peer_port; /**< The peer port number, can be the same port. */
> + uint32_t is_original_dir:1;
> + /**< Direction of this connection when creating a flow, the value only
> + * affects the subsequent flows creation.
> + */
and here too
> + uint32_t enable:1;
> + /**< Enable / disable the conntrack HW module. When disabled, the
> + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> + * In this state the HW will act as passthrough.
> + */
Does it disable the entire conntrack HW module for all flows?
It sounds like it does. If so, that is confusing.
> + uint32_t live_connection:1;
> + /**< At least one ack was seen, after the connection was established. */
> + uint32_t selective_ack:1;
> + /**< Enable selective ACK on this connection. */
> + uint32_t challenge_ack_passed:1;
> + /**< A challenge ack has passed. */
> + uint32_t last_direction:1;
> + /**< 1: The last packet is seen that comes from the original direction.
> + * 0: From the reply direction.
> + */
> + uint32_t liberal_mode:1;
> + /**< No TCP check will be done except the state change. */
> + enum rte_flow_conntrack_state state;
> + /**< The current state of the connection. */
> + uint8_t max_ack_window;
> + /**< Scaling factor for maximal allowed ACK window. */
> + uint8_t retransmission_limit;
> + /**< Maximal allowed number of retransmission times. */
> + struct rte_flow_tcp_dir_param original_dir;
> + /**< TCP parameters of the original direction. */
> + struct rte_flow_tcp_dir_param reply_dir;
> + /**< TCP parameters of the reply direction. */
> + uint16_t last_window;
> + /**< The window value of the last packet passed this conntrack. */
> + enum rte_flow_conntrack_index last_index;
> + uint32_t last_seq;
> + /**< The sequence of the last packet passed this conntrack. */
> + uint32_t last_ack;
> + /**< The acknowledgement of the last packet passed this conntrack. */
> + uint32_t last_end;
> + /**< The total value ACK + payload length of the last packet passed
> + * this conntrack.
> + */
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Wrapper structure for the context update interface.
> + * Ports cannot support updating, and the only valid solution is to
> + * destroy the old context and create a new one instead.
> + */
> +struct rte_flow_modify_conntrack {
> + struct rte_flow_action_conntrack new_ct;
> + /**< New connection tracking parameters to be updated. */
and here
> + uint32_t direction:1; /**< The direction field will be updated. */
> + uint32_t state:1;
> + /**< All the other fields except direction will be updated. */
> + uint32_t reserved:30; /**< Reserved bits for the future usage. */
> +};
> +
> /**
> * Field IDs for MODIFY_FIELD action.
> */
>
* Re: [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item
2021-03-18 7:30 [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item Bing Zhao
2021-03-22 15:16 ` Andrew Rybchenko
@ 2021-03-23 23:27 ` Ajit Khaparde
2021-04-07 2:41 ` Bing Zhao
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
2 siblings, 1 reply; 45+ messages in thread
From: Ajit Khaparde @ 2021-03-23 23:27 UTC (permalink / raw)
To: Bing Zhao
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, dpdk-dev
On Thu, Mar 18, 2021 at 12:30 AM Bing Zhao <bingz@nvidia.com> wrote:
>
> This commit introduced the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow the application should add
> the conntrack action and in the following flow(s) the application
> should use the conntrack item to match on the result.
>
> A TCP connection has two directions traffic. To set a conntrack
> action context correctly, information from packets of both directions
> are required.
>
> The conntrack action should be created on one port and supply the
> peer port as a parameter to the action. After context creating, it
> could only be used between the ports (dual-port mode) or a single
> port. The application should modify the action via action_ctx_update
> interface before each use in dual-port mode, in order to set the
> correct direction for the following rte flow.
>
> Query will be supported via action_ctx_query interface, about the
> current packets information and connection status.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to take full advantage of the
> conntrack. Only the valid packets should pass the conntrack, packets
> with invalid TCP information, like out of window, or with invalid
> header, like malformed, should not pass.
>
> Testpmd command line example:
>
> set conntrack [index] enable is 1 last_seq is xxx last ack is xxx /
> ... / orig_dir win_scale is xxx sent_end is xxx max_win is xxx ... /
> rply_dir ... / end
> flow action_ctx [CTX] create ingress ... / conntrack is [index] / end
> flow create 0 group X ingress patterns ... / tcp / end actions action_ctx [CTX]
> / jump group Y / end
> flow create 0 group Y ingress patterns ... / ct is [Valid] / end actions
> queue index [hairpin queue] / end
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> lib/librte_ethdev/rte_flow.h | 191 +++++++++++++++++++++++++++++++++++
> 1 file changed, 191 insertions(+)
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 669e677e91..b2e4f0751a 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -550,6 +550,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches conntrack state.
> + *
> + * See struct rte_flow_item_conntrack.
> + */
> + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> };
>
> /**
> @@ -1654,6 +1663,49 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +/**
> + * The packet is with valid.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID (1 << 0)
> +/**
> + * The state of the connection was changed.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
> +/**
> + * Error state was detected on this packet for this connection.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_ERROR (1 << 2)
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_DISABLED (1 << 3)
> +/**
> + * The packet contains some bad field(s).
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT (1 << 4)
Why not an enum? We could use the bits, but group them under an enum?
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> + *
> + * Matches the state of a packet after it passed the connection tracking
> + * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
> + * or a reasonable combination of these bits.
> + */
> +struct rte_flow_item_conntrack {
> + uint32_t flags;
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
> + .flags = 0xffffffff,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> @@ -2236,6 +2288,17 @@ enum rte_flow_action_type {
> * See struct rte_flow_action_modify_field.
> */
> RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
> +
> + /**
> + * [META]
> + *
> + * Enable tracking a TCP connection state.
> + *
> + * Send packet to HW connection tracking module for examination.
> + *
> + * See struct rte_flow_action_conntrack.
> + */
> + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> };
>
> /**
> @@ -2828,6 +2891,134 @@ struct rte_flow_action_set_dscp {
> */
> struct rte_flow_shared_action;
>
> +/**
> + * The state of a TCP connection.
> + */
> +enum rte_flow_conntrack_state {
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< SYN-ACK packet was seen. */
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< 3-way handshark was done. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< After second FIN, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> + /**< Second FIN was ACKed, connection was closed. */
> +};
> +
> +/**
> + * The last passed TCP packet flags of a connection.
> + */
> +enum rte_flow_conntrack_index {
> + RTE_FLOW_CONNTRACK_INDEX_NONE = 0, /**< No Flag. */
> + RTE_FLOW_CONNTRACK_INDEX_SYN = (1 << 0), /**< With SYN flag. */
> + RTE_FLOW_CONNTRACK_INDEX_SYN_ACK = (1 << 1), /**< With SYN+ACK flag. */
> + RTE_FLOW_CONNTRACK_INDEX_FIN = (1 << 2), /**< With FIN flag. */
> + RTE_FLOW_CONNTRACK_INDEX_ACK = (1 << 3), /**< With ACK flag. */
> + RTE_FLOW_CONNTRACK_INDEX_RST = (1 << 4), /**< With RST flag. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * Configuration parameters for each direction of a TCP connection.
> + */
> +struct rte_flow_tcp_dir_param {
> + uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
> + uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
> + uint32_t last_ack_seen:1;
> + /**< An ACK packet has been received by this side. */
> + uint32_t data_unacked:1;
> + /**< If set, indicates that there is unacked data of the connection. */
> + uint32_t sent_end;
> + /**< Maximal value of sequence + payload length over sent
> + * packets (next ACK from the opposite direction).
> + */
> + uint32_t reply_end;
> + /**< Maximal value of (ACK + window size) over received packet + length
> + * over sent packet (maximal sequence could be sent).
> + */
> + uint32_t max_win;
> + /**< Maximal value of actual window size over sent packets. */
> + uint32_t max_ack;
> + /**< Maximal value of ACK over sent packets. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Configuration and initial state for the connection tracking module.
> + * This structure could be used for both setting and query.
Can we split the structure into set and query?
Some of the fields seem to be relevant only for a query.
Also, the names will be simpler and easier to understand that way.
> + */
> +struct rte_flow_action_conntrack {
> + uint16_t peer_port; /**< The peer port number, can be the same port. */
> + uint32_t is_original_dir:1;
> + /**< Direction of this connection when creating a flow, the value only
> + * affects the subsequent flows creation.
> + */
> + uint32_t enable:1;
> + /**< Enable / disable the conntrack HW module. When disabled, the
> + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> + * In this state the HW will act as passthrough.
> + */
We should be able to enable the block in HW implicitly based on the
rte_flow_create.
I don't think this is needed.
> + uint32_t live_connection:1;
> + /**< At least one ack was seen, after the connection was established. */
> + uint32_t selective_ack:1;
> + /**< Enable selective ACK on this connection. */
> + uint32_t challenge_ack_passed:1;
> + /**< A challenge ack has passed. */
> + uint32_t last_direction:1;
> + /**< 1: The last packet is seen that comes from the original direction.
> + * 0: From the reply direction.
> + */
> + uint32_t liberal_mode:1;
> + /**< No TCP check will be done except the state change. */
> + enum rte_flow_conntrack_state state;
initial_state or cur_state?
> + /**< The current state of the connection. */
> + uint8_t max_ack_window;
> + /**< Scaling factor for maximal allowed ACK window. */
> + uint8_t retransmission_limit;
> + /**< Maximal allowed number of retransmission times. */
> + struct rte_flow_tcp_dir_param original_dir;
> + /**< TCP parameters of the original direction. */
> + struct rte_flow_tcp_dir_param reply_dir;
> + /**< TCP parameters of the reply direction. */
> + uint16_t last_window;
> + /**< The window value of the last packet passed this conntrack. */
> + enum rte_flow_conntrack_index last_index;
Do you mean rte_flow_conntrack_last_state - as in the last state as seen
by the HW block?
Or maybe it is the TCP flag and not the state?
> + uint32_t last_seq;
> + /**< The sequence of the last packet passed this conntrack. */
> + uint32_t last_ack;
> + /**< The acknowledgement of the last packet passed this conntrack. */
> + uint32_t last_end;
> + /**< The total value ACK + payload length of the last packet passed
> + * this conntrack.
> + */
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Wrapper structure for the context update interface.
> + * Ports cannot support updating, and the only valid solution is to
> + * destroy the old context and create a new one instead.
> + */
In that case why not destroy the flow and create a new one?
> +struct rte_flow_modify_conntrack {
> + struct rte_flow_action_conntrack new_ct;
> + /**< New connection tracking parameters to be updated. */
> + uint32_t direction:1; /**< The direction field will be updated. */
> + uint32_t state:1;
> + /**< All the other fields except direction will be updated. */
> + uint32_t reserved:30; /**< Reserved bits for the future usage. */
> +};
> +
> /**
> * Field IDs for MODIFY_FIELD action.
> */
> --
> 2.19.0.windows.1
>
* Re: [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item
2021-03-23 23:27 ` Ajit Khaparde
@ 2021-04-07 2:41 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-07 2:41 UTC (permalink / raw)
To: Ajit Khaparde
Cc: Ori Kam, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, dpdk-dev
Hello,
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Wednesday, March 24, 2021 7:27 AM
> To: Bing Zhao <bingz@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>; dpdk-dev <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [RFC] ethdev: introduce conntrack flow
> action and item
>
> On Thu, Mar 18, 2021 at 12:30 AM Bing Zhao <bingz@nvidia.com> wrote:
> >
> > This commit introduced the conntrack action and item.
> >
> > Usually the HW offloading is stateless. For some stateful
> offloading
> > like a TCP connection, HW module will help provide the ability of
> a
> > full offloading w/o SW participation after the connection was
> > established.
> >
> > The basic usage is that in the first flow the application should
> add
> > the conntrack action and in the following flow(s) the application
> > should use the conntrack item to match on the result.
> >
> > A TCP connection has two directions traffic. To set a conntrack
> > action context correctly, information from packets of both
> directions
> > are required.
> >
> > The conntrack action should be created on one port and supply the
> > peer port as a parameter to the action. After context creating, it
> > could only be used between the ports (dual-port mode) or a single
> > port. The application should modify the action via
> action_ctx_update
> > interface before each use in dual-port mode, in order to set the
> > correct direction for the following rte flow.
> >
> > Query will be supported via action_ctx_query interface, about the
> > current packets information and connection status.
> >
> > For the packets received during the conntrack setup, it is
> suggested
> > to re-inject the packets in order to take full advantage of the
> > conntrack. Only the valid packets should pass the conntrack,
> packets
> > with invalid TCP information, like out of window, or with invalid
> > header, like malformed, should not pass.
> >
> > Testpmd command line example:
> >
> > set conntrack [index] enable is 1 last_seq is xxx last ack is xxx
> /
> > ... / orig_dir win_scale is xxx sent_end is xxx max_win is xxx ...
> /
> > rply_dir ... / end
> > flow action_ctx [CTX] create ingress ... / conntrack is [index] /
> end
> > flow create 0 group X ingress patterns ... / tcp / end actions
> action_ctx [CTX]
> > / jump group Y / end
> > flow create 0 group Y ingress patterns ... / ct is [Valid] / end
> actions
> > queue index [hairpin queue] / end
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > lib/librte_ethdev/rte_flow.h | 191
> +++++++++++++++++++++++++++++++++++
> > 1 file changed, 191 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h
> b/lib/librte_ethdev/rte_flow.h
> > index 669e677e91..b2e4f0751a 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -550,6 +550,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches conntrack state.
> > + *
> > + * See struct rte_flow_item_conntrack.
> > + */
> > + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -1654,6 +1663,49 @@ rte_flow_item_geneve_opt_mask = {
> > };
> > #endif
> >
> > +/**
> > + * The packet is with valid.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID (1 << 0)
> > +/**
> > + * The state of the connection was changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
> > +/**
> > + * Error state was detected on this packet for this connection.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_ERROR (1 << 2)
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_DISABLED (1 << 3)
> > +/**
> > + * The packet contains some bad field(s).
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT (1 << 4)
> Why not an enum? We could use the bits, but group them under an enum?
>
It could be. BTW, is there any convention describing when to use #define macros and when to use enum types?
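As a sketch of the grouping Ajit suggests, the bits could live under a single enum while still being OR-able into the 32-bit `flags` field (the enum name and member names here are hypothetical; the bit values are those from the RFC macros):

```c
/* Hypothetical grouping of the RFC's RTE_FLOW_CONNTRACK_FLAG_* bits
 * under one enum; the values are unchanged, so they can still be
 * OR-ed into the uint32_t `flags` of struct rte_flow_item_conntrack.
 */
enum rte_flow_conntrack_pkt_state {
	RTE_FLOW_CONNTRACK_PKT_STATE_VALID    = (1 << 0),
	RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED  = (1 << 1),
	RTE_FLOW_CONNTRACK_PKT_STATE_ERROR    = (1 << 2),
	RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED = (1 << 3),
	RTE_FLOW_CONNTRACK_PKT_STATE_BAD      = (1 << 4),
};
```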
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> > + *
> > + * Matches the state of a packet after it passed the connection
> tracking
> > + * examination. The state is a bit mask of one
> RTE_FLOW_CONNTRACK_FLAG*
> > + * or a reasonable combination of these bits.
> > + */
> > +struct rte_flow_item_conntrack {
> > + uint32_t flags;
> > +};
> > +
> > +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_conntrack
> rte_flow_item_conntrack_mask = {
> > + .flags = 0xffffffff,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > @@ -2236,6 +2288,17 @@ enum rte_flow_action_type {
> > * See struct rte_flow_action_modify_field.
> > */
> > RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Enable tracking a TCP connection state.
> > + *
> > + * Send packet to HW connection tracking module for
> examination.
> > + *
> > + * See struct rte_flow_action_conntrack.
> > + */
> > + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -2828,6 +2891,134 @@ struct rte_flow_action_set_dscp {
> > */
> > struct rte_flow_shared_action;
> >
> > +/**
> > + * The state of a TCP connection.
> > + */
> > +enum rte_flow_conntrack_state {
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< SYN-ACK packet was seen. */
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< 3-way handshark was done. */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN packet was received to close the connection.
> */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< After second FIN, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > + /**< Second FIN was ACKed, connection was closed. */
> > +};
> > +
> > +/**
> > + * The last passed TCP packet flags of a connection.
> > + */
> > +enum rte_flow_conntrack_index {
> > + RTE_FLOW_CONNTRACK_INDEX_NONE = 0, /**< No Flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_SYN = (1 << 0), /**< With SYN
> flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_SYN_ACK = (1 << 1), /**< With
> SYN+ACK flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_FIN = (1 << 2), /**< With FIN
> flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_ACK = (1 << 3), /**< With ACK
> flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_RST = (1 << 4), /**< With RST
> flag. */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * Configuration parameters for each direction of a TCP
> connection.
> > + */
> > +struct rte_flow_tcp_dir_param {
> > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> disable. */
> > + uint32_t close_initiated:1; /**< The FIN was sent by this
> direction. */
> > + uint32_t last_ack_seen:1;
> > + /**< An ACK packet has been received by this side. */
> > + uint32_t data_unacked:1;
> > + /**< If set, indicates that there is unacked data of the
> connection. */
> > + uint32_t sent_end;
> > + /**< Maximal value of sequence + payload length over sent
> > + * packets (next ACK from the opposite direction).
> > + */
> > + uint32_t reply_end;
> > + /**< Maximal value of (ACK + window size) over received
> packet + length
> > + * over sent packet (maximal sequence could be sent).
> > + */
> > + uint32_t max_win;
> > + /**< Maximal value of actual window size over sent packets.
> */
> > + uint32_t max_ack;
> > + /**< Maximal value of ACK over sent packets. */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Configuration and initial state for the connection tracking
> module.
> > + * This structure could be used for both setting and query.
> Can we split the structure into set and query.
> Some of the fields seem to be relevant for a query.
> Also the names will be simpler and easier to understand that way.
>
To my understanding, it may be better to keep all of them in a single structure. Different HW may have different query capabilities, and most of the fields are used for both query and create/update.
If some field is not supported for querying, the PMD could return a default value, e.g., 0. And if we split query and create/update:
1. The query struct would still need to be the union of the query capabilities of different HW.
2. The two structures would share most of their fields.
> > + */
> > +struct rte_flow_action_conntrack {
> > + uint16_t peer_port; /**< The peer port number, can be the
> same port. */
> > + uint32_t is_original_dir:1;
> > + /**< Direction of this connection when creating a flow,
> the value only
> > + * affects the subsequent flows creation.
> > + */
> > + uint32_t enable:1;
> > + /**< Enable / disable the conntrack HW module. When
> disabled, the
> > + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> > + * In this state the HW will act as passthrough.
> > + */
> We should be able to enable the block in HW implicitly based on the
> rte_flow_create.
> I don't think this is needed.
>
> > + uint32_t live_connection:1;
> > + /**< At least one ack was seen, after the connection was
> established. */
> > + uint32_t selective_ack:1;
> > + /**< Enable selective ACK on this connection. */
> > + uint32_t challenge_ack_passed:1;
> > + /**< A challenge ack has passed. */
> > + uint32_t last_direction:1;
> > + /**< 1: The last packet is seen that comes from the
> original direction.
> > + * 0: From the reply direction.
> > + */
> > + uint32_t liberal_mode:1;
> > + /**< No TCP check will be done except the state change. */
> > + enum rte_flow_conntrack_state state;
> initial_state or cur_state?
>
> > + /**< The current state of the connection. */
> > + uint8_t max_ack_window;
> > + /**< Scaling factor for maximal allowed ACK window. */
> > + uint8_t retransmission_limit;
> > + /**< Maximal allowed number of retransmission times. */
> > + struct rte_flow_tcp_dir_param original_dir;
> > + /**< TCP parameters of the original direction. */
> > + struct rte_flow_tcp_dir_param reply_dir;
> > + /**< TCP parameters of the reply direction. */
> > + uint16_t last_window;
> > + /**< The window value of the last packet passed this
> conntrack. */
> > + enum rte_flow_conntrack_index last_index;
> Do you mean rte_flow_conntrack_last_state - as in last state as seen
> by HW block?
> Or maybe it is the TCP flag and not state?
They are a little different. It is the second: the TCP flags of the last packet that passed the connection tracking module, not the connection state.
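A minimal sketch of how `last_index` could be derived from the TCP flags of the last packet, matching the clarification above (the TCP flag bit values are standard; the mapping function and its names are illustrative, not part of the RFC):

```c
#include <stdint.h>

/* Standard TCP header flag bits. */
#define TCP_FLAG_FIN 0x01
#define TCP_FLAG_SYN 0x02
#define TCP_FLAG_RST 0x04
#define TCP_FLAG_ACK 0x10

/* Values from the RFC's enum rte_flow_conntrack_index. */
enum ct_last_index {
	CT_INDEX_NONE    = 0,
	CT_INDEX_SYN     = (1 << 0),
	CT_INDEX_SYN_ACK = (1 << 1),
	CT_INDEX_FIN     = (1 << 2),
	CT_INDEX_ACK     = (1 << 3),
	CT_INDEX_RST     = (1 << 4),
};

/* Illustrative mapping: record the flags of the last packet seen,
 * not the connection state. */
enum ct_last_index ct_last_index_from_tcp(uint8_t tcp_flags)
{
	if (tcp_flags & TCP_FLAG_RST)
		return CT_INDEX_RST;
	if (tcp_flags & TCP_FLAG_FIN)
		return CT_INDEX_FIN;
	if ((tcp_flags & (TCP_FLAG_SYN | TCP_FLAG_ACK)) ==
	    (TCP_FLAG_SYN | TCP_FLAG_ACK))
		return CT_INDEX_SYN_ACK;
	if (tcp_flags & TCP_FLAG_SYN)
		return CT_INDEX_SYN;
	if (tcp_flags & TCP_FLAG_ACK)
		return CT_INDEX_ACK;
	return CT_INDEX_NONE;
}
```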
>
> > + uint32_t last_seq;
> > + /**< The sequence of the last packet passed this conntrack.
> */
> > + uint32_t last_ack;
> > + /**< The acknowledgement of the last packet passed this
> conntrack. */
> > + uint32_t last_end;
> > + /**< The total value ACK + payload length of the last
> packet passed
> > + * this conntrack.
> > + */
> > +};
> > +
> > +/**
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Wrapper structure for the context update interface.
> > + * Ports cannot support updating, and the only valid solution is
> to
> > + * destroy the old context and create a new one instead.
> > + */
> In that case why not destroy the flow and create a new one?
I may not fully understand your question, but I will try to answer in more detail; please comment.
The connection tracking action context will be created before any flow creation, then it will be used by the flows:
1. A conntrack action is used for the flows of bi-directional traffic, and creating it requires information from TCP packets of both directions.
2. This conntrack action can be used by multiple flows over a single port or dual ports.
3. A flow can be destroyed without destroying the action, and the action can then be reused by a new flow (if needed).
4. The flow of one direction can be destroyed without destroying the flow of the opposite direction.
So if the user wants to destroy the action context, the destroy interface should be called directly. The action context is still "alive" after a flow using it is destroyed; it cannot be destroyed together with the flow.
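The lifecycle described above can be modeled as a reference-counted context that outlives individual flows. This is an illustrative model only, not the rte_flow API; all names here (`ct_action_ctx`, `ct_ctx_create`, etc.) are hypothetical:

```c
#include <stdlib.h>

/* Model of the lifecycle: the conntrack action context is created
 * before any flow, shared by several flows, and stays alive when a
 * flow is destroyed; only an explicit destroy call frees it. */
struct ct_action_ctx {
	int refcnt; /* number of flows currently using this context */
};

struct ct_action_ctx *ct_ctx_create(void)
{
	return calloc(1, sizeof(struct ct_action_ctx));
}

void ct_flow_attach(struct ct_action_ctx *ctx) { ctx->refcnt++; }
void ct_flow_detach(struct ct_action_ctx *ctx) { ctx->refcnt--; }

/* Explicit destroy is only legal once no flow references the context. */
int ct_ctx_destroy(struct ct_action_ctx *ctx)
{
	if (ctx->refcnt != 0)
		return -1; /* still in use by some flow */
	free(ctx);
	return 0;
}
```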
>
> > +struct rte_flow_modify_conntrack {
> > + struct rte_flow_action_conntrack new_ct;
> > + /**< New connection tracking parameters to be updated. */
> > + uint32_t direction:1; /**< The direction field will be
> updated. */
> > + uint32_t state:1;
> > + /**< All the other fields except direction will be updated.
> */
> > + uint32_t reserved:30; /**< Reserved bits for the future
> usage. */
> > +};
> > +
> > /**
> > * Field IDs for MODIFY_FIELD action.
> > */
> > --
> > 2.19.0.windows.1
> >
* Re: [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item
2021-03-22 15:16 ` Andrew Rybchenko
@ 2021-04-07 7:43 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-07 7:43 UTC (permalink / raw)
To: Andrew Rybchenko, Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit; +Cc: dev
Hi Andrew,
Sorry for the late reply.
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, March 22, 2021 11:17 PM
> To: Bing Zhao <bingz@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-
> Contact-Thomas Monjalon <thomas@monjalon.net>;
> ferruh.yigit@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [RFC] ethdev: introduce conntrack flow action and item
>
> External email: Use caution opening links or attachments
>
>
> On 3/18/21 10:30 AM, Bing Zhao wrote:
> > This commit introduced the conntrack action and item.
> >
> > Usually the HW offloading is stateless. For some stateful
> offloading
> > like a TCP connection, HW module will help provide the ability of
> a
> > full offloading w/o SW participation after the connection was
> > established.
> >
> > The basic usage is that in the first flow the application should
> add
> > the conntrack action and in the following flow(s) the application
> > should use the conntrack item to match on the result.
> >
> > A TCP connection has two directions traffic. To set a conntrack
> action
> > context correctly, information from packets of both directions are
> > required.
> >
> > The conntrack action should be created on one port and supply the
> peer
> > port as a parameter to the action. After context creating, it
> could
> > only be used between the ports (dual-port mode) or a single port.
> The
> > application should modify the action via action_ctx_update
> interface
> > before each use in dual-port mode, in order to set the correct
> > direction for the following rte flow.
>
> Sorry, but "update interface before each use" sounds frightening.
> May be I simply don't understand all reasons behind.
Sorry for the unclear description; "each use in dual-port mode" should be "single-port mode". It is a suggestion, not a must, depending on the HW. Connection tracking is a bi-directional action, used by flows of both the original and the reply direction.
In dual-port mode, the original and reply traffic usually come from different ports, so the SW can distinguish them implicitly.
But in single-port mode, e.g., a gateway scenario, all the traffic is ingress on the same port and it is hard to distinguish the direction.
The update of the action may happen only at the SW level, or also at the HW level, depending on the NIC features.
If the next several flows to be created with this action context are for the same direction, there is no need to call such an API; the interface is called only when switching to the opposite direction.
The interface is also needed when some fields of the action context, like the seq/ACK/window, must be changed.
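The update semantics documented for struct rte_flow_modify_conntrack (the `direction` bit updates only `is_original_dir`; the `state` bit updates everything else) can be sketched with simplified local copies of the structures. The reduced structs and `ct_apply_update` are illustrative names, not the rte_flow API:

```c
#include <stdint.h>

/* Reduced versions of the RFC structures, keeping just enough fields
 * to show the selection semantics of the two update bits. */
struct ct_conf {
	uint32_t is_original_dir:1;
	uint32_t enable:1;
	uint32_t last_seq;
};

struct ct_modify {
	struct ct_conf new_ct; /* new parameters to be applied */
	uint32_t direction:1;  /* update only the direction field */
	uint32_t state:1;      /* update all fields except direction */
};

void ct_apply_update(struct ct_conf *cur, const struct ct_modify *m)
{
	if (m->direction)
		cur->is_original_dir = m->new_ct.is_original_dir;
	if (m->state) { /* everything except the direction */
		cur->enable = m->new_ct.enable;
		cur->last_seq = m->new_ct.last_seq;
	}
}
```

With only `direction` set, an application can flip the context to the opposite direction before creating the next flow without disturbing the TCP state fields.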
>
> > Query will be supported via action_ctx_query interface, about the
> > current packets information and connection status.
> >
> > For the packets received during the conntrack setup, it is
> suggested
> > to re-inject the packets in order to take full advantage of the
> > conntrack. Only the valid packets should pass the conntrack,
> packets
> > with invalid TCP information, like out of window, or with invalid
> > header, like malformed, should not pass.
> >
> > Testpmd command line example:
> >
> > set conntrack [index] enable is 1 last_seq is xxx last ack is xxx
> /
> > ... / orig_dir win_scale is xxx sent_end is xxx max_win is xxx ...
> /
> > rply_dir ... / end flow action_ctx [CTX] create ingress ... /
> > conntrack is [index] / end flow create 0 group X ingress
> patterns ...
> > / tcp / end actions action_ctx [CTX] / jump group Y / end flow
> create
> > 0 group Y ingress patterns ... / ct is [Valid] / end actions queue
> > index [hairpin queue] / end
@Andrew @Ori
Is such a command line interface OK from your points of view?
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > lib/librte_ethdev/rte_flow.h | 191
> > +++++++++++++++++++++++++++++++++++
> > 1 file changed, 191 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h
> > b/lib/librte_ethdev/rte_flow.h index 669e677e91..b2e4f0751a 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -550,6 +550,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches conntrack state.
> > + *
> > + * See struct rte_flow_item_conntrack.
> > + */
> > + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -1654,6 +1663,49 @@ rte_flow_item_geneve_opt_mask = { };
> #endif
> >
> > +/**
> > + * The packet is with valid.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID (1 << 0)
>
> It sounds like conntrack state is valid, but not packet is valid
> from conntrack point of view. May be:
> RTE_FLOW_CONNTRACK_FLAG_PKT_VALID? Or _VALID_PKT to go with _BAD_PKT.
The original idea is that the state is valid after the packet integrity checking:
1. If some field of the packet itself has an error, it is _BAD_PKT and the packet will not be checked by the HW.
2. If the packet passes the HW connection tracking module, it should be considered _VALID.
3. If the packet fails the HW module checking, e.g., out of window, it should be considered an INVALID or ERROR state.
But yes, the names should describe themselves more clearly.
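The checking order described above can be sketched as a small classifier. The macros are those from the RFC; `ct_classify` and its parameters are illustrative only:

```c
#include <stdint.h>

#define RTE_FLOW_CONNTRACK_FLAG_STATE_VALID   (1 << 0)
#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
#define RTE_FLOW_CONNTRACK_FLAG_ERROR         (1 << 2)
#define RTE_FLOW_CONNTRACK_FLAG_DISABLED      (1 << 3)
#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT       (1 << 4)

/* Sketch of the order above: a malformed packet is flagged BAD_PKT
 * and skips the conntrack check entirely; otherwise the conntrack
 * result decides between VALID and ERROR. */
uint32_t ct_classify(int pkt_malformed, int ct_check_passed)
{
	if (pkt_malformed)
		return RTE_FLOW_CONNTRACK_FLAG_BAD_PKT;
	if (ct_check_passed)
		return RTE_FLOW_CONNTRACK_FLAG_STATE_VALID;
	return RTE_FLOW_CONNTRACK_FLAG_ERROR; /* e.g., out of window */
}
```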
>
> > +/**
> > + * The state of the connection was changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_STATE_CHANGED (1 << 1)
> > +/**
> > + * Error state was detected on this packet for this connection.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_ERROR (1 << 2)
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_DISABLED (1 << 3)
> > +/**
> > + * The packet contains some bad field(s).
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_BAD_PKT (1 << 4)
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> > + *
> > + * Matches the state of a packet after it passed the connection
> > +tracking
> > + * examination. The state is a bit mask of one
> > +RTE_FLOW_CONNTRACK_FLAG*
> > + * or a reasonable combination of these bits.
> > + */
> > +struct rte_flow_item_conntrack {
> > + uint32_t flags;
> > +};
> > +
> > +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */ #ifndef
> > +__cplusplus static const struct rte_flow_item_conntrack
> > +rte_flow_item_conntrack_mask = {
> > + .flags = 0xffffffff,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > @@ -2236,6 +2288,17 @@ enum rte_flow_action_type {
> > * See struct rte_flow_action_modify_field.
> > */
> > RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Enable tracking a TCP connection state.
> > + *
> > + * Send packet to HW connection tracking module for
> examination.
> > + *
> > + * See struct rte_flow_action_conntrack.
> > + */
> > + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -2828,6 +2891,134 @@ struct rte_flow_action_set_dscp {
> > */
> > struct rte_flow_shared_action;
> >
> > +/**
> > + * The state of a TCP connection.
> > + */
> > +enum rte_flow_conntrack_state {
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< SYN-ACK packet was seen. */
>
> May I suggest to put comments before enum member. IMHO it is more
> readable. Comment after makes sense if it is on the same line,
> otherwise, it is better to use comments before code.
Sure, I will change it in the patch itself.
>
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< 3-way handshark was done. */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN packet was received to close the connection.
> */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< After second FIN, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > + /**< Second FIN was ACKed, connection was closed. */ };
> > +
> > +/**
> > + * The last passed TCP packet flags of a connection.
> > + */
> > +enum rte_flow_conntrack_index {
>
> Sorry, I don't understand why it is named conntrack_index.
Maybe "flag" would be a better name than "index"? Or any other suggestion?
>
> > + RTE_FLOW_CONNTRACK_INDEX_NONE = 0, /**< No Flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_SYN = (1 << 0), /**< With SYN flag.
> */
> > + RTE_FLOW_CONNTRACK_INDEX_SYN_ACK = (1 << 1), /**< With
> SYN+ACK flag. */
> > + RTE_FLOW_CONNTRACK_INDEX_FIN = (1 << 2), /**< With FIN flag.
> */
> > + RTE_FLOW_CONNTRACK_INDEX_ACK = (1 << 3), /**< With ACK flag.
> */
> > + RTE_FLOW_CONNTRACK_INDEX_RST = (1 << 4), /**< With RST flag.
> */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * Configuration parameters for each direction of a TCP
> connection.
> > + */
> > +struct rte_flow_tcp_dir_param {
> > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> disable. */
> > + uint32_t close_initiated:1; /**< The FIN was sent by this
> direction. */
> > + uint32_t last_ack_seen:1;
> > + /**< An ACK packet has been received by this side. */
>
> Same here about comments after fields.
Will change them all.
>
> > + uint32_t data_unacked:1;
> > + /**< If set, indicates that there is unacked data of the
> connection. */
> > + uint32_t sent_end;
> > + /**< Maximal value of sequence + payload length over sent
> > + * packets (next ACK from the opposite direction).
> > + */
> > + uint32_t reply_end;
> > + /**< Maximal value of (ACK + window size) over received
> packet + length
> > + * over sent packet (maximal sequence could be sent).
> > + */
> > + uint32_t max_win;
> > + /**< Maximal value of actual window size over sent packets.
> */
> > + uint32_t max_ack;
> > + /**< Maximal value of ACK over sent packets. */ };
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Configuration and initial state for the connection tracking
> module.
> > + * This structure could be used for both setting and query.
> > + */
> > +struct rte_flow_action_conntrack {
> > + uint16_t peer_port; /**< The peer port number, can be the
> same port. */
> > + uint32_t is_original_dir:1;
> > + /**< Direction of this connection when creating a flow, the
> value only
> > + * affects the subsequent flows creation.
> > + */
>
> and here tool
Will change them all in the patch.
>
> > + uint32_t enable:1;
> > + /**< Enable / disable the conntrack HW module. When disabled,
> the
> > + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> > + * In this state the HW will act as passthrough.
> > + */
>
> Does it disable entire conntrack HW module for all flows?
> It sounds like this. If so - confusing.
Yes, for all the flows using this connection tracking context. The remaining flows, which use other CT contexts, will not be impacted.
>
> > + uint32_t live_connection:1;
> > + /**< At least one ack was seen, after the connection was
> established. */
> > + uint32_t selective_ack:1;
> > + /**< Enable selective ACK on this connection. */
> > + uint32_t challenge_ack_passed:1;
> > + /**< A challenge ack has passed. */
> > + uint32_t last_direction:1;
> > + /**< 1: The last packet is seen that comes from the original
> direction.
> > + * 0: From the reply direction.
> > + */
> > + uint32_t liberal_mode:1;
> > + /**< No TCP check will be done except the state change. */
> > + enum rte_flow_conntrack_state state;
> > + /**< The current state of the connection. */
> > + uint8_t max_ack_window;
> > + /**< Scaling factor for maximal allowed ACK window. */
> > + uint8_t retransmission_limit;
> > + /**< Maximal allowed number of retransmission times. */
> > + struct rte_flow_tcp_dir_param original_dir;
> > + /**< TCP parameters of the original direction. */
> > + struct rte_flow_tcp_dir_param reply_dir;
> > + /**< TCP parameters of the reply direction. */
> > + uint16_t last_window;
> > + /**< The window value of the last packet passed this
> conntrack. */
> > + enum rte_flow_conntrack_index last_index;
> > + uint32_t last_seq;
> > + /**< The sequence of the last packet passed this conntrack.
> */
> > + uint32_t last_ack;
> > + /**< The acknowledgement of the last packet passed this
> conntrack. */
> > + uint32_t last_end;
> > + /**< The total value ACK + payload length of the last packet
> passed
> > + * this conntrack.
> > + */
> > +};
> > +
> > +/**
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Wrapper structure for the context update interface.
> > + * Ports cannot support updating, and the only valid solution is
> to
> > + * destroy the old context and create a new one instead.
> > + */
> > +struct rte_flow_modify_conntrack {
> > + struct rte_flow_action_conntrack new_ct;
> > + /**< New connection tracking parameters to be updated. */
>
> and here
Will change them all in the formal patch.
>
> > + uint32_t direction:1; /**< The direction field will be
> updated. */
> > + uint32_t state:1;
> > + /**< All the other fields except direction will be updated.
> */
> > + uint32_t reserved:30; /**< Reserved bits for the future
> usage.
> > +*/ };
> > +
> > /**
> > * Field IDs for MODIFY_FIELD action.
> > */
> >
* [dpdk-dev] [PATCH] ethdev: introduce conntrack flow action and item
2021-03-18 7:30 [dpdk-dev] [RFC] ethdev: introduce conntrack flow action and item Bing Zhao
2021-03-22 15:16 ` Andrew Rybchenko
2021-03-23 23:27 ` Ajit Khaparde
@ 2021-04-10 13:46 ` Bing Zhao
2021-04-15 16:24 ` Ori Kam
` (4 more replies)
2 siblings, 5 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-10 13:46 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko; +Cc: dev, ajit.khaparde
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For stateful offloading such as
a TCP connection, the HW module can provide full offloading without
SW participation after the connection is established.
The basic usage is that the application adds the conntrack action in
the first flow, and uses the conntrack item in the following flow(s)
to match on the result.
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both directions
is required.
The conntrack action should be created on one port, with the peer
port supplied as a parameter to the action. After the context is
created, it can only be used between those ports (dual-port mode) or
on a single port. The application should modify the action via the
"action_handle_update" API only before using it to create a flow of
the opposite direction. This helps the driver recognize the direction
of the flow to be created, especially in single-port mode, where
traffic of both directions goes through the same port when the
application works as a "forwarding engine" rather than an end point.
There is no need to call the update interface if nothing changes for
the subsequent flows.
Query will be supported via the action_ctx_query interface, reporting
the current packet information and connection status. The query
capability of the fields depends on the HW.
For packets received during the conntrack setup, it is suggested to
re-inject them in order to take full advantage of the conntrack. Only
valid packets should pass the conntrack; packets with invalid TCP
information (e.g., out of window) or an invalid header (e.g.,
malformed) should not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++++++++++++++++++
1 file changed, 195 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 6cc57136ac..d506377f7e 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * See struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is with valid state after conntrack checking.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
+/**
+ * The state of the connection was changed.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
+/**
+ * Error is detected on this packet for this connection and
+ * an invalid state is set.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
+/**
+ * The packet contains some bad field(s) and cannot continue
+ * with the conntrack module checking.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
+ * or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2267,6 +2321,17 @@ enum rte_flow_action_type {
* See struct rte_flow_action_modify_field.
*/
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * Send packet to HW connection tracking module for examination.
+ *
+ * See struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2859,6 +2924,136 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_shared_action;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ /** SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /** 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /** First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /** First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /** Second FIN was received, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /** Second FIN was ACKed, connection was closed. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_tcp_last_index {
+ RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ */
+struct rte_flow_tcp_dir_param {
+ uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
+ uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
+ /** An ACK packet has been received by this side. */
+ uint32_t last_ack_seen:1;
+ /** If set, indicates that there is unacked data on the connection. */
+ uint32_t data_unacked:1;
+ /** Maximal value of sequence + payload length over sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t sent_end;
+ /** Maximal value of (ACK + window size) over received packets + length
+ * over sent packets (maximal sequence that could be sent).
+ */
+ uint32_t reply_end;
+ /** Maximal value of the actual window size over sent packets. */
+ uint32_t max_win;
+ /** Maximal value of ACK over sent packets. */
+ uint32_t max_ack;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ */
+struct rte_flow_action_conntrack {
+ uint16_t peer_port; /**< The peer port number, can be the same port. */
+ /** Direction of this connection when creating a flow; the value only
+ * affects the creation of subsequent flows.
+ */
+ uint32_t is_original_dir:1;
+ /** Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED.
+ * In this state the HW will act as passthrough.
+ * It only affects this conntrack object in the HW without any effect
+ * on the other objects.
+ */
+ uint32_t enable:1;
+ /** At least one ACK was seen after the connection was established. */
+ uint32_t live_connection:1;
+ /** Enable selective ACK on this connection. */
+ uint32_t selective_ack:1;
+ /** A challenge ACK has passed. */
+ uint32_t challenge_ack_passed:1;
+ /** 1: The last packet seen came from the original direction.
+ * 0: From the reply direction.
+ */
+ uint32_t last_direction:1;
+ /** No TCP check will be done except the state change. */
+ uint32_t liberal_mode:1;
+ /** The current state of the connection. */
+ enum rte_flow_conntrack_state state;
+ /** Scaling factor for the maximal allowed ACK window. */
+ uint8_t max_ack_window;
+ /** Maximal allowed number of retransmissions. */
+ uint8_t retransmission_limit;
+ /** TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /** TCP parameters of the reply direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /** The window value of the last packet passed this conntrack. */
+ uint16_t last_window;
+ /** TCP flags of the last packet passed this conntrack. */
+ enum rte_flow_conntrack_tcp_last_index last_index;
+ /** The sequence number of the last packet passed this conntrack. */
+ uint32_t last_seq;
+ /** The acknowledgement number of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /** The total value ACK + payload length of the last packet passed
+ * this conntrack.
+ */
+ uint32_t last_end;
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * Some ports may not support updating; in that case, the only valid
+ * solution is to destroy the old context and create a new one instead.
+ */
+struct rte_flow_modify_conntrack {
+ /** New connection tracking parameters to be updated. */
+ struct rte_flow_action_conntrack new_ct;
+ uint32_t direction:1; /**< The direction field will be updated. */
+ /** All the other fields except direction will be updated. */
+ uint32_t state:1;
+ uint32_t reserved:30; /**< Reserved bits for future usage. */
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.30.0.windows.2
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: introduce conntrack flow action and item
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
@ 2021-04-15 16:24 ` Ori Kam
2021-04-15 16:44 ` Bing Zhao
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 0/2] " Bing Zhao
` (3 subsequent siblings)
4 siblings, 1 reply; 45+ messages in thread
From: Ori Kam @ 2021-04-15 16:24 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Bing
I'm fine with this patch but you are missing the documentation part:
1. doc/guides/prog_guide/rte_flow.rst
2. doc/guides/rel_notes/release_21_05.rst
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Saturday, April 10, 2021 4:47 PM
> To: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> Subject: [PATCH] ethdev: introduce conntrack flow action and item
>
> [full patch quoted; body trimmed as a verbatim duplicate of the v2 patch below]
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 0/2] ethdev: introduce conntrack flow action and item
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
2021-04-15 16:24 ` Ori Kam
@ 2021-04-15 16:41 ` Bing Zhao
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 1/2] " Bing Zhao
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
` (2 subsequent siblings)
4 siblings, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-15 16:41 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko; +Cc: dev, ajit.khaparde
Depends-on: series-16419 ("Change shared action API to action handle API")
This patch set includes the conntrack action and item definitions as
well as the testpmd CLI proposal.
---
v2: add testpmd CLI proposal
---
Bing Zhao (2):
ethdev: introduce conntrack flow action and item
app/testpmd: add CLI for conntrack
app/test-pmd/cmdline.c | 354 +++++++++++++++++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 +++++++++
app/test-pmd/config.c | 65 ++++++-
app/test-pmd/testpmd.h | 2 +
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++
6 files changed, 709 insertions(+), 1 deletion(-)
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 0/2] " Bing Zhao
@ 2021-04-15 16:41 ` Bing Zhao
2021-04-16 10:49 ` Thomas Monjalon
2021-04-16 12:41 ` Ori Kam
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack Bing Zhao
1 sibling, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-15 16:41 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko; +Cc: dev, ajit.khaparde
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For some stateful offloading,
such as a TCP connection, the HW module helps provide the ability of
full offloading without SW participation after the connection is
established.
The basic usage is that the application adds the conntrack action in
the first flow, and in the following flow(s) uses the conntrack item
to match on the result.
A TCP connection carries traffic in two directions. To set a conntrack
action context correctly, information from packets of both directions
is required.
The conntrack action should be created on one port, supplying the
peer port as a parameter to the action. After the context is created,
it can only be used between those ports (dual-port mode) or on a
single port. The application should modify the action via the API
"action_handle_update" only before using it to create a flow with the
opposite direction. This helps the driver recognize the direction of
the flow to be created, especially in single-port mode: the traffic
from both directions goes through the same port if the application
works as a "forwarding engine" rather than an end point. There is no
need to call the update interface if the subsequent flows have
nothing to be changed.
Query will be supported via the "action_ctx_query" interface to
retrieve the current packet information and connection status. The
query capabilities of the fields depend on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject them in order to take full advantage of the conntrack.
Only valid packets should pass the conntrack; packets with invalid
TCP information (like out-of-window) or with an invalid header (like
malformed) should not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 27a161559d..0af601d508 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+ MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
};
/** Generate flow_action[] entry. */
@@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
* indirect action handle.
*/
MK_FLOW_ACTION(INDIRECT, 0),
+ MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
};
int
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 91ae25b1da..024d1a2026 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * See struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is with valid state after conntrack checking.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
+/**
+ * The state of the connection was changed.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
+/**
+ * Error is detected on this packet for this connection and
+ * an invalid state is set.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
+/**
+ * The packet contains some bad field(s) and cannot continue
+ * with the conntrack module checking.
+ */
+#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
+ * or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2277,6 +2331,17 @@ enum rte_flow_action_type {
* same port or across different ports.
*/
RTE_FLOW_ACTION_TYPE_INDIRECT,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * Send packet to HW connection tracking module for examination.
+ *
+ * See struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2875,6 +2940,136 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_action_handle;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ /** SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /** 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /** First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /** First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /** Second FIN was received, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /** Second FIN was ACKed, connection was closed. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_tcp_last_index {
+ RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ */
+struct rte_flow_tcp_dir_param {
+ uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
+ uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
+ /** An ACK packet has been received by this side. */
+ uint32_t last_ack_seen:1;
+ /** If set, indicates that there is unacked data on the connection. */
+ uint32_t data_unacked:1;
+ /** Maximal value of sequence + payload length over sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t sent_end;
+ /** Maximal value of (ACK + window size) over received packets + length
+ * over sent packets (maximal sequence that could be sent).
+ */
+ uint32_t reply_end;
+ /** Maximal value of the actual window size over sent packets. */
+ uint32_t max_win;
+ /** Maximal value of ACK over sent packets. */
+ uint32_t max_ack;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ */
+struct rte_flow_action_conntrack {
+ uint16_t peer_port; /**< The peer port number, can be the same port. */
+ /** Direction of this connection when creating a flow; the value only
+ * affects the creation of subsequent flows.
+ */
+ uint32_t is_original_dir:1;
+ /** Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED.
+ * In this state the HW will act as passthrough.
+ * It only affects this conntrack object in the HW without any effect
+ * on the other objects.
+ */
+ uint32_t enable:1;
+ /** At least one ACK was seen after the connection was established. */
+ uint32_t live_connection:1;
+ /** Enable selective ACK on this connection. */
+ uint32_t selective_ack:1;
+ /** A challenge ACK has passed. */
+ uint32_t challenge_ack_passed:1;
+ /** 1: The last packet seen came from the original direction.
+ * 0: From the reply direction.
+ */
+ uint32_t last_direction:1;
+ /** No TCP check will be done except the state change. */
+ uint32_t liberal_mode:1;
+ /** The current state of the connection. */
+ enum rte_flow_conntrack_state state;
+ /** Scaling factor for the maximal allowed ACK window. */
+ uint8_t max_ack_window;
+ /** Maximal allowed number of retransmissions. */
+ uint8_t retransmission_limit;
+ /** TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /** TCP parameters of the reply direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /** The window value of the last packet passed this conntrack. */
+ uint16_t last_window;
+ /** TCP flags of the last packet passed this conntrack. */
+ enum rte_flow_conntrack_tcp_last_index last_index;
+ /** The sequence number of the last packet passed this conntrack. */
+ uint32_t last_seq;
+ /** The acknowledgement number of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /** The total value ACK + payload length of the last packet passed
+ * this conntrack.
+ */
+ uint32_t last_end;
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * Some ports may not support updating; in that case, the only valid
+ * solution is to destroy the old context and create a new one instead.
+ */
+struct rte_flow_modify_conntrack {
+ /** New connection tracking parameters to be updated. */
+ struct rte_flow_action_conntrack new_ct;
+ uint32_t direction:1; /**< The direction field will be updated. */
+ /** All the other fields except direction will be updated. */
+ uint32_t state:1;
+ uint32_t reserved:30; /**< Reserved bits for future usage. */
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 0/2] " Bing Zhao
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 1/2] " Bing Zhao
@ 2021-04-15 16:41 ` Bing Zhao
2021-04-16 8:46 ` Ori Kam
1 sibling, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-15 16:41 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko; +Cc: dev, ajit.khaparde
Command lines for testing connection tracking are added. To create
a conntrack object, three parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow creation. Before updating, the same structure
is also needed, together with the update command "conntrack_update",
to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet
should jump to the next flow for result checking with the conntrack
item. The state is defined with bits, and a valid combination can
be supported.
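Tying the three parts together, a session might look like the sketch below. The token names follow the cmdline definitions in this patch; the numeric values and the flow-rule syntax (indirect action handle, jump target, `conntrack is 1` match) are illustrative only and would need to be checked against the final testpmd grammar:

```
testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0 \
         last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 \
         last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8
testpmd> flow indirect_action 0 create ingress action conntrack / end
testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end \
         actions indirect 0 / jump group 5 / end
testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end \
         actions queue index 5 / end
```

The first rule sends TCP traffic through the conntrack object; the second matches on the resulting state bits.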
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
app/test-pmd/cmdline.c | 354 ++++++++++++++++++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 ++++++++++
app/test-pmd/config.c | 65 ++++++-
app/test-pmd/testpmd.h | 2 +
4 files changed, 512 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c28a3d2e5d..58ab7191d6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13618,6 +13618,358 @@ cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
},
};
+/** Set connection tracking object common details */
+struct cmd_set_conntrack_common_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t common;
+ cmdline_fixed_string_t peer;
+ cmdline_fixed_string_t is_orig;
+ cmdline_fixed_string_t enable;
+ cmdline_fixed_string_t live;
+ cmdline_fixed_string_t sack;
+ cmdline_fixed_string_t cack;
+ cmdline_fixed_string_t last_dir;
+ cmdline_fixed_string_t liberal;
+ cmdline_fixed_string_t state;
+ cmdline_fixed_string_t max_ack_win;
+ cmdline_fixed_string_t retrans;
+ cmdline_fixed_string_t last_win;
+ cmdline_fixed_string_t last_seq;
+ cmdline_fixed_string_t last_ack;
+ cmdline_fixed_string_t last_end;
+ cmdline_fixed_string_t last_index;
+ uint8_t stat;
+ uint8_t factor;
+ uint16_t peer_port;
+ uint32_t is_original;
+ uint32_t en;
+ uint32_t is_live;
+ uint32_t s_ack;
+ uint32_t c_ack;
+ uint32_t ld;
+ uint32_t lb;
+ uint8_t re_num;
+ uint8_t li;
+ uint16_t lw;
+ uint32_t ls;
+ uint32_t la;
+ uint32_t le;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_common_com =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ common, "com");
+cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer, "peer");
+cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer_port, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_orig, "is_orig");
+cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_original, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ enable, "enable");
+cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ en, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_live =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ live, "live");
+cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_live, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ sack, "sack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ s_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ cack, "cack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ c_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_dir, "last_dir");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ld, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ liberal, "liberal");
+cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lb, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_state =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ state, "state");
+cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ stat, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ max_ack_win, "max_ack_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_max_ackwin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ factor, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ retrans, "r_lim");
+cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ re_num, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_win, "last_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lw, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_seq, "last_seq");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ls, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_ack, "last_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ la, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_end, "last_end");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ le, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_index =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_index, "last_index");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_index_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ li, RTE_UINT8);
+
+static void cmd_set_conntrack_common_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_common_result *res = parsed_result;
+
+ /* No need to swap to big endian. */
+ conntrack_context.peer_port = res->peer_port;
+ conntrack_context.is_original_dir = res->is_original;
+ conntrack_context.enable = res->en;
+ conntrack_context.live_connection = res->is_live;
+ conntrack_context.selective_ack = res->s_ack;
+ conntrack_context.challenge_ack_passed = res->c_ack;
+ conntrack_context.last_direction = res->ld;
+ conntrack_context.liberal_mode = res->lb;
+ conntrack_context.state = (enum rte_flow_conntrack_state)res->stat;
+ conntrack_context.max_ack_window = res->factor;
+ conntrack_context.retransmission_limit = res->re_num;
+ conntrack_context.last_window = res->lw;
+ conntrack_context.last_index =
+ (enum rte_flow_conntrack_tcp_last_index)res->li;
+ conntrack_context.last_seq = res->ls;
+ conntrack_context.last_ack = res->la;
+ conntrack_context.last_end = res->le;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_common = {
+ .f = cmd_set_conntrack_common_parsed,
+ .data = NULL,
+ .help_str = "set conntrack com peer <port_id> is_orig <dir> enable <en>"
+ " live <ack_seen> sack <en> cack <passed> last_dir <dir>"
+ " liberal <en> state <s> max_ack_win <factor> r_lim <num>"
+ " last_win <win> last_seq <seq> last_ack <ack> last_end <end>"
+ " last_index <flag>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_common_com,
+ (void *)&cmd_set_conntrack_common_peer,
+ (void *)&cmd_set_conntrack_common_peer_value,
+ (void *)&cmd_set_conntrack_common_is_orig,
+ (void *)&cmd_set_conntrack_common_is_orig_value,
+ (void *)&cmd_set_conntrack_common_enable,
+ (void *)&cmd_set_conntrack_common_enable_value,
+ (void *)&cmd_set_conntrack_common_live,
+ (void *)&cmd_set_conntrack_common_live_value,
+ (void *)&cmd_set_conntrack_common_sack,
+ (void *)&cmd_set_conntrack_common_sack_value,
+ (void *)&cmd_set_conntrack_common_cack,
+ (void *)&cmd_set_conntrack_common_cack_value,
+ (void *)&cmd_set_conntrack_common_last_dir,
+ (void *)&cmd_set_conntrack_common_last_dir_value,
+ (void *)&cmd_set_conntrack_common_liberal,
+ (void *)&cmd_set_conntrack_common_liberal_value,
+ (void *)&cmd_set_conntrack_common_state,
+ (void *)&cmd_set_conntrack_common_state_value,
+ (void *)&cmd_set_conntrack_common_max_ackwin,
+ (void *)&cmd_set_conntrack_common_max_ackwin_value,
+ (void *)&cmd_set_conntrack_common_retrans,
+ (void *)&cmd_set_conntrack_common_retrans_value,
+ (void *)&cmd_set_conntrack_common_last_win,
+ (void *)&cmd_set_conntrack_common_last_win_value,
+ (void *)&cmd_set_conntrack_common_last_seq,
+ (void *)&cmd_set_conntrack_common_last_seq_value,
+ (void *)&cmd_set_conntrack_common_last_ack,
+ (void *)&cmd_set_conntrack_common_last_ack_value,
+ (void *)&cmd_set_conntrack_common_last_end,
+ (void *)&cmd_set_conntrack_common_last_end_value,
+ (void *)&cmd_set_conntrack_common_last_index,
+ (void *)&cmd_set_conntrack_common_last_index_value,
+ NULL,
+ },
+};
+
/** Set the connection tracking object's per-direction details. */
+struct cmd_set_conntrack_dir_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t dir;
+ cmdline_fixed_string_t scale;
+ cmdline_fixed_string_t fin;
+ cmdline_fixed_string_t ack_seen;
+ cmdline_fixed_string_t unack;
+ cmdline_fixed_string_t sent_end;
+ cmdline_fixed_string_t reply_end;
+ cmdline_fixed_string_t max_win;
+ cmdline_fixed_string_t max_ack;
+ uint32_t factor;
+ uint32_t f;
+ uint32_t as;
+ uint32_t un;
+ uint32_t se;
+ uint32_t re;
+ uint32_t mw;
+ uint32_t ma;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ dir, "orig#rply");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ scale, "scale");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ factor, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ fin, "fin");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ f, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ack_seen, "acked");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ as, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ unack, "unack_data");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ un, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ sent_end, "sent_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ se, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ reply_end, "reply_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ re, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_win, "max_win");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ mw, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_ack, "max_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ma, RTE_UINT32);
+
+static void cmd_set_conntrack_dir_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_dir_result *res = parsed_result;
+ struct rte_flow_tcp_dir_param *dir = NULL;
+
+ if (strcmp(res->dir, "orig") == 0)
+ dir = &conntrack_context.original_dir;
+ else if (strcmp(res->dir, "rply") == 0)
+ dir = &conntrack_context.reply_dir;
+ else
+ return;
+ dir->scale = res->factor;
+ dir->close_initiated = res->f;
+ dir->last_ack_seen = res->as;
+ dir->data_unacked = res->un;
+ dir->sent_end = res->se;
+ dir->reply_end = res->re;
+ dir->max_ack = res->ma;
+ dir->max_win = res->mw;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_dir = {
+ .f = cmd_set_conntrack_dir_parsed,
+ .data = NULL,
+ .help_str = "set conntrack orig|rply scale <factor> fin <sent>"
+ " acked <seen> unack_data <unack> sent_end <sent>"
+ " reply_end <reply> max_win <win> max_ack <ack>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_dir_dir,
+ (void *)&cmd_set_conntrack_dir_scale,
+ (void *)&cmd_set_conntrack_dir_scale_value,
+ (void *)&cmd_set_conntrack_dir_fin,
+ (void *)&cmd_set_conntrack_dir_fin_value,
+ (void *)&cmd_set_conntrack_dir_ack,
+ (void *)&cmd_set_conntrack_dir_ack_value,
+ (void *)&cmd_set_conntrack_dir_unack_data,
+ (void *)&cmd_set_conntrack_dir_unack_data_value,
+ (void *)&cmd_set_conntrack_dir_sent_end,
+ (void *)&cmd_set_conntrack_dir_sent_end_value,
+ (void *)&cmd_set_conntrack_dir_reply_end,
+ (void *)&cmd_set_conntrack_dir_reply_end_value,
+ (void *)&cmd_set_conntrack_dir_max_win,
+ (void *)&cmd_set_conntrack_dir_max_win_value,
+ (void *)&cmd_set_conntrack_dir_max_ack,
+ (void *)&cmd_set_conntrack_dir_max_ack_value,
+ NULL,
+ },
+};
+
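For context, a hypothetical testpmd session exercising the two commands added above could look as follows (all field values are illustrative placeholders, not taken from a real trace):

```console
testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 0 sack 1 cack 0 last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8
testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end 2632987379 reply_end 2633016339 max_win 28960 max_ack 2632987379
testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2532480967 reply_end 2532546247 max_win 65280 max_ack 2532480967
```

Each command is entered on a single line; together they populate the global conntrack_context that the flow commands then copy into the action configuration.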
/* Strict link priority scheduling mode setting */
static void
cmd_strict_link_prio_parsed(
@@ -17117,6 +17469,8 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
(cmdline_parse_inst_t *)&cmd_ddp_add,
(cmdline_parse_inst_t *)&cmd_ddp_del,
(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d83dec942a..fc5e31be5e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -289,6 +289,7 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_CONNTRACK,
/* Validate/create actions. */
ACTIONS,
@@ -427,6 +428,10 @@ enum index {
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_WIDTH,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -565,6 +570,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
struct mplsoudp_decap_conf mplsoudp_decap_conf;
+struct rte_flow_action_conntrack conntrack_context;
+
#define ACTION_SAMPLE_ACTIONS_NUM 10
#define RAW_SAMPLE_CONFS_MAX_NUM 8
/** Storage for struct rte_flow_action_sample including external data. */
@@ -956,6 +963,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_CONNTRACK,
END_SET,
ZERO,
};
@@ -1370,6 +1378,8 @@ static const enum index next_action[] = {
ACTION_SAMPLE,
ACTION_INDIRECT,
ACTION_MODIFY_FIELD,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
ZERO,
};
@@ -1638,6 +1648,13 @@ static const enum index action_modify_field_src[] = {
ZERO,
};
+static const enum index action_update_conntrack[] = {
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1728,6 +1745,10 @@ static int
parse_vc_modify_field_id(struct context *ctx, const struct token *token,
const char *str, unsigned int len, void *buf,
unsigned int size);
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size);
static int parse_destroy(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -3373,6 +3394,13 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "conntrack state",
+ .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -4471,6 +4499,34 @@ static const struct token token_list[] = {
.call = parse_vc_action_sample_index,
.comp = comp_set_sample_index,
},
+ [ACTION_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "create a conntrack object",
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_action_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE] = {
+ .name = "conntrack_update",
+ .help = "update a conntrack object",
+ .next = NEXT(action_update_conntrack),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_modify_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE_DIR] = {
+ .name = "dir",
+ .help = "update a conntrack object direction",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
+ [ACTION_CONNTRACK_UPDATE_CTX] = {
+ .name = "ctx",
+ .help = "update a conntrack object context",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
@@ -6277,6 +6333,42 @@ parse_vc_modify_field_id(struct context *ctx, const struct token *token,
return len;
}
+/** Parse the conntrack update, not a rte_flow_action. */
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size)
+{
+ struct buffer *out = buf;
+ struct rte_flow_modify_conntrack *ct_modify = NULL;
+
+ (void)size;
+ if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
+ ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
+ return -1;
+ /* Token name must match. */
+ if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+ return -1;
+ /* Nothing else to do if there is no buffer. */
+ if (!out)
+ return len;
+ ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
+ if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
+ ct_modify->new_ct.is_original_dir =
+ conntrack_context.is_original_dir;
+ ct_modify->direction = 1;
+ } else {
+ uint32_t old_dir;
+
+ old_dir = ct_modify->new_ct.is_original_dir;
+ memcpy(&ct_modify->new_ct, &conntrack_context,
+ sizeof(conntrack_context));
+ ct_modify->new_ct.is_original_dir = old_dir;
+ ct_modify->state = 1;
+ }
+ return len;
+}
+
/** Parse tokens for destroy command. */
static int
parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1eec0612a4..06143a7501 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1483,6 +1483,11 @@ port_action_handle_create(portid_t port_id, uint32_t id,
pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
age->context = &pia->age_type;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
+ struct rte_flow_action_conntrack *ct =
+ (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
+
+ memcpy(ct, &conntrack_context, sizeof(*ct));
}
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x22, sizeof(error));
@@ -1564,11 +1569,24 @@ port_action_handle_update(portid_t port_id, uint32_t id,
{
struct rte_flow_error error;
struct rte_flow_action_handle *action_handle;
+ struct port_indirect_action *pia;
+ const void *update;
action_handle = port_action_handle_get_by_id(port_id, id);
if (!action_handle)
return -EINVAL;
- if (rte_flow_action_handle_update(port_id, action_handle, action,
+ pia = action_get_by_id(port_id, id);
+ if (!pia)
+ return -EINVAL;
+ switch (pia->type) {
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ update = action->conf;
+ break;
+ default:
+ update = action;
+ break;
+ }
+ if (rte_flow_action_handle_update(port_id, action_handle, update,
&error)) {
return port_flow_complain(&error);
}
@@ -1621,6 +1639,51 @@ port_action_handle_query(portid_t port_id, uint32_t id)
}
data = NULL;
break;
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ if (!ret) {
+ struct rte_flow_action_conntrack *ct = data;
+
+ printf("Conntrack Context:\n"
+ " Peer: %u, Flow dir: %s, Enable: %u\n"
+ " Live: %u, SACK: %u, CACK: %u\n"
+ " Packet dir: %s, Liberal: %u, State: %u\n"
+ " Factor: %u, Retrans: %u, TCP flags: %u\n"
+ " Last Seq: %u, Last ACK: %u\n"
+ " Last Win: %u, Last End: %u\n",
+ ct->peer_port,
+ ct->is_original_dir ? "Original" : "Reply",
+ ct->enable, ct->live_connection,
+ ct->selective_ack, ct->challenge_ack_passed,
+ ct->last_direction ? "Original" : "Reply",
+ ct->liberal_mode, ct->state,
+ ct->max_ack_window, ct->retransmission_limit,
+ ct->last_index, ct->last_seq, ct->last_ack,
+ ct->last_window, ct->last_end);
+ printf(" Original Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->original_dir.scale,
+ ct->original_dir.close_initiated,
+ ct->original_dir.last_ack_seen,
+ ct->original_dir.data_unacked,
+ ct->original_dir.sent_end,
+ ct->original_dir.reply_end,
+ ct->original_dir.max_win,
+ ct->original_dir.max_ack);
+ printf(" Reply Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->reply_dir.scale,
+ ct->reply_dir.close_initiated,
+ ct->reply_dir.last_ack_seen,
+ ct->reply_dir.data_unacked,
+ ct->reply_dir.sent_end, ct->reply_dir.reply_end,
+ ct->reply_dir.max_win, ct->reply_dir.max_ack);
+ }
+ data = NULL;
+ break;
default:
printf("Indirect action %u (type: %d) on port %u doesn't"
" support query\n", id, pia->type, port_id);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d1eaaadb17..d7528f9cb5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf mplsoudp_decap_conf;
extern enum rte_eth_rx_mq_mode rx_mq_mode;
+extern struct rte_flow_action_conntrack conntrack_context;
+
static inline unsigned int
lcore_num(void)
{
--
2.19.0.windows.1
* Re: [dpdk-dev] [PATCH] ethdev: introduce conntrack flow action and item
2021-04-15 16:24 ` Ori Kam
@ 2021-04-15 16:44 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-15 16:44 UTC (permalink / raw)
To: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Ori,
> -----Original Message-----
> From: Ori Kam <orika@nvidia.com>
> Sent: Friday, April 16, 2021 12:25 AM
> To: Bing Zhao <bingz@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; ajit.khaparde@broadcom.com
> Subject: RE: [PATCH] ethdev: introduce conntrack flow action and
> item
>
> Hi Bing
> I'm fine with this patch but you are missing the documentation part:
> 1. doc/guides/prog_guide/rte_flow.rst
> 2. doc/guides/rel_notes/release_21_05.rst
Thanks for the comments and the reminder. I will update the doc in the next version, v3.
For v2, only the testpmd CLI part is appended.
The reviewers' help is needed to review and confirm whether my current CLI proposal is OK.
>
> > -----Original Message-----
> > From: Bing Zhao <bingz@nvidia.com>
> > Sent: Saturday, April 10, 2021 4:47 PM
> > To: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> > Subject: [PATCH] ethdev: introduce conntrack flow action and item
> >
> > This commit introduces the conntrack action and item.
> >
> > Usually, HW offloading is stateless. For stateful offloading such as
> > a TCP connection, the HW module can provide full offloading without
> > SW participation after the connection has been established.
> >
> > The basic usage is that the application adds the conntrack action in
> > the first flow, and in the following flow(s) uses the conntrack item
> > to match on the result.
> >
> > A TCP connection carries traffic in two directions. To set a
> > conntrack action context correctly, information from packets of both
> > directions is required.
> >
> > The conntrack action should be created on one port, with the peer
> > port supplied as a parameter to the action. After the context is
> > created, it can only be used between those ports (dual-port mode) or
> > on a single port. The application should modify the action via the
> > "action_handle_update" API before using it to create a flow in the
> > opposite direction. This helps the driver recognize the direction of
> > the flow to be created, especially in single-port mode, where the
> > traffic from both directions goes through the same port if the
> > application works as a forwarding engine rather than an end point.
> > There is no need to call the update interface if nothing changes for
> > the subsequent flows.
> >
> > Query will be supported via the action_ctx_query interface, returning
> > the current packet information and connection status. Which fields
> > can be queried depends on the HW.
> >
> > For packets received during the conntrack setup, it is suggested to
> > re-inject them in order to take full advantage of the conntrack. Only
> > valid packets should pass the conntrack; packets with invalid TCP
> > information (e.g. out of window) or with an invalid header (e.g.
> > malformed) should not pass.
> >
> > Naming and definition:
> >
> > https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> >
> > https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
> >
> > Other reference:
> >
> > https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++++++++++++++++++
> > 1 file changed, 195 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h
> > b/lib/librte_ethdev/rte_flow.h index 6cc57136ac..d506377f7e 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches conntrack state.
> > + *
> > + * See struct rte_flow_item_conntrack.
> > + */
> > + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = { };
> #endif
> >
> > +/**
> > + * The packet has a valid state after conntrack checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
> > +/**
> > + * The state of the connection was changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
> > +/**
> > + * Error is detected on this packet for this connection and
> > + * an invalid state is set.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
> > +/**
> > + * The packet contains some bad field(s) and cannot continue
> > + * with the conntrack module checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> > + *
> > + * Matches the state of a packet after it passed the connection
> > + * tracking examination. The state is a bit mask of one
> > + * RTE_FLOW_CONNTRACK_FLAG* or a reasonable combination of these bits.
> > + */
> > +struct rte_flow_item_conntrack {
> > + uint32_t flags;
> > +};
> > +
> > +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_conntrack
> > +rte_flow_item_conntrack_mask = {
> > + .flags = 0xffffffff,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > @@ -2267,6 +2321,17 @@ enum rte_flow_action_type {
> > * See struct rte_flow_action_modify_field.
> > */
> > RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Enable tracking a TCP connection state.
> > + *
> > + * Send packet to HW connection tracking module for examination.
> > + *
> > + * See struct rte_flow_action_conntrack.
> > + */
> > + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -2859,6 +2924,136 @@ struct rte_flow_action_set_dscp {
> > */
> > struct rte_flow_shared_action;
> >
> > +/**
> > + * The state of a TCP connection.
> > + */
> > +enum rte_flow_conntrack_state {
> > + /**< SYN-ACK packet was seen. */
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< 3-way handshake was done. */
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< First FIN packet was received to close the connection. */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< Second FIN was received, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< Second FIN was ACKed, connection was closed. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > +};
> > +
> > +/**
> > + * The last passed TCP packet flags of a connection.
> > + */
> > +enum rte_flow_conntrack_tcp_last_index {
> > + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag. */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * Configuration parameters for each direction of a TCP connection.
> > + */
> > +struct rte_flow_tcp_dir_param {
> > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
> > + uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
> > + /**< An ACK packet has been received by this side. */
> > + uint32_t last_ack_seen:1;
> > + /**< If set, indicates that there is unacked data of the connection. */
> > + uint32_t data_unacked:1;
> > + /**< Maximal value of sequence + payload length over sent
> > + * packets (next ACK from the opposite direction).
> > + */
> > + uint32_t sent_end;
> > + /**< Maximal value of (ACK + window size) over received packet +
> > + * length over sent packet (maximal sequence could be sent).
> > + */
> > + uint32_t reply_end;
> > + /**< Maximal value of actual window size over sent packets. */
> > + uint32_t max_win;
> > + /**< Maximal value of ACK over sent packets. */
> > + uint32_t max_ack;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Configuration and initial state for the connection tracking
> > + * module. This structure could be used for both setting and query.
> > + */
> > +struct rte_flow_action_conntrack {
> > + uint16_t peer_port; /**< The peer port number, can be the same port. */
> > + /**< Direction of this connection when creating a flow; the value
> > + * only affects the subsequent flows' creation.
> > + */
> > + uint32_t is_original_dir:1;
> > + /**< Enable / disable the conntrack HW module. When disabled, the
> > + * result will always be RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED.
> > + * In this state the HW will act as passthrough.
> > + * It only affects this conntrack object in the HW without any
> > + * effect on the other objects.
> > + */
> > + uint32_t enable:1;
> > + /**< At least one ACK was seen after the connection was established. */
> > + uint32_t live_connection:1;
> > + /**< Enable selective ACK on this connection. */
> > + uint32_t selective_ack:1;
> > + /**< A challenge ack has passed. */
> > + uint32_t challenge_ack_passed:1;
> > + /**< 1: The last packet seen came from the original direction.
> > + * 0: From the reply direction.
> > + */
> > + uint32_t last_direction:1;
> > + /**< No TCP check will be done except the state change. */
> > + uint32_t liberal_mode:1;
> > + /**< The current state of the connection. */
> > + enum rte_flow_conntrack_state state;
> > + /**< Scaling factor for maximal allowed ACK window. */
> > + uint8_t max_ack_window;
> > + /**< Maximal allowed number of retransmission times. */
> > + uint8_t retransmission_limit;
> > + /**< TCP parameters of the original direction. */
> > + struct rte_flow_tcp_dir_param original_dir;
> > + /**< TCP parameters of the reply direction. */
> > + struct rte_flow_tcp_dir_param reply_dir;
> > + /**< The window value of the last packet that passed this conntrack. */
> > + uint16_t last_window;
> > + enum rte_flow_conntrack_tcp_last_index last_index;
> > + /**< The sequence of the last packet that passed this conntrack. */
> > + uint32_t last_seq;
> > + /**< The acknowledgement of the last packet that passed this
> > + * conntrack.
> > + */
> > + uint32_t last_ack;
> > + /**< The total value (ACK + payload length) of the last packet
> > + * that passed this conntrack.
> > + */
> > + uint32_t last_end;
> > +};
> > +
> > +/**
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Wrapper structure for the context update interface.
> > + * Ports cannot support updating, and the only valid solution is
> to
> > + * destroy the old context and create a new one instead.
> > + */
> > +struct rte_flow_modify_conntrack {
> > + /**< New connection tracking parameters to be updated. */
> > + struct rte_flow_action_conntrack new_ct;
> > + uint32_t direction:1; /**< The direction field will be updated.
> */
> > + /**< All the other fields except direction will be updated. */
> > + uint32_t state:1;
> > + uint32_t reserved:30; /**< Reserved bits for future usage. */
> > +};
> > +
> > /**
> > * Field IDs for MODIFY_FIELD action.
> > */
> > --
> > 2.30.0.windows.2
BR. Bing
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack Bing Zhao
@ 2021-04-16 8:46 ` Ori Kam
2021-04-16 18:20 ` Bing Zhao
0 siblings, 1 reply; 45+ messages in thread
From: Ori Kam @ 2021-04-16 8:46 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Bing,
1. You are missing the documentation patch:
doc/guides/testpmd_app_ug/testpmd_funcs.rst
Please make sure that you add examples at the end of the file.
You can see an example in the integrity patch.
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, April 15, 2021 7:41 PM
> Subject: [PATCH v2 2/2] app/testpmd: add CLI for conntrack
>
> The command line for testing connection tracking is added. To create
> a conntrack object, 3 parts are needed.
> set conntrack com peer ...
> set conntrack orig scale ...
> set conntrack rply scale ...
> This will create a full conntrack action structure for the indirect
> action. After the indirect action handle of "conntrack" created, it
> could be used in the flow creation. Before updating, the same
> structure is also needed together with the update command
> "conntrack_update" to update the "dir" or "ctx".
>
> After the flow with conntrack action created, the packet should jump
> to the next flow for the result checking with conntrack item. The
> state is defined with bits and a valid combination could be
> supported.
>
Can you please add more detailed examples?
Also, what are the commands to update and use the connection tracking action and item?
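[Editorial note: an illustrative testpmd sequence, assembled from this patch's token definitions and the cover-letter examples; the concrete values and group numbers are placeholders, and the documentation patch should be treated as authoritative for the final syntax:]

```
testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8
testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end 2632987379 reply_end 2633016339 max_win 28960 max_ack 2632987379
testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2532480967 reply_end 2532546247 max_win 65280 max_ack 2532480967
testpmd> flow indirect_action 0 create ingress action conntrack / end
testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
```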
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> app/test-pmd/cmdline.c | 354 ++++++++++++++++++++++++++++++++++++
> app/test-pmd/cmdline_flow.c | 92 ++++++++++
> app/test-pmd/config.c | 65 ++++++-
> app/test-pmd/testpmd.h | 2 +
> 4 files changed, 512 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index c28a3d2e5d..58ab7191d6 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -13618,6 +13618,358 @@ cmdline_parse_inst_t
> cmd_set_mplsoudp_decap_with_vlan = {
> },
> };
>
> +/** Set connection tracking object common details */
> +struct cmd_set_conntrack_common_result {
> + cmdline_fixed_string_t set;
> + cmdline_fixed_string_t conntrack;
> + cmdline_fixed_string_t common;
> + cmdline_fixed_string_t peer;
> + cmdline_fixed_string_t is_orig;
> + cmdline_fixed_string_t enable;
> + cmdline_fixed_string_t live;
> + cmdline_fixed_string_t sack;
> + cmdline_fixed_string_t cack;
> + cmdline_fixed_string_t last_dir;
> + cmdline_fixed_string_t liberal;
> + cmdline_fixed_string_t state;
> + cmdline_fixed_string_t max_ack_win;
> + cmdline_fixed_string_t retrans;
> + cmdline_fixed_string_t last_win;
> + cmdline_fixed_string_t last_seq;
> + cmdline_fixed_string_t last_ack;
> + cmdline_fixed_string_t last_end;
> + cmdline_fixed_string_t last_index;
> + uint8_t stat;
> + uint8_t factor;
> + uint16_t peer_port;
> + uint32_t is_original;
> + uint32_t en;
> + uint32_t is_live;
> + uint32_t s_ack;
> + uint32_t c_ack;
> + uint32_t ld;
> + uint32_t lb;
> + uint8_t re_num;
> + uint8_t li;
> + uint16_t lw;
> + uint32_t ls;
> + uint32_t la;
> + uint32_t le;
Why not use full names?
> +};
> +
> +cmdline_parse_token_string_t cmd_set_conntrack_set =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + set, "set");
> +cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + conntrack, "conntrack");
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_com =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + common, "com");
> +cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + peer, "peer");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + peer_port, RTE_UINT16);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + is_orig, "is_orig");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + is_original, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + enable, "enable");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + en, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_live =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + live, "live");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + is_live, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + sack, "sack");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + s_ack, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + cack, "cack");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + c_ack, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_dir, "last_dir");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + ld, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + liberal, "liberal");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + lb, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_state =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + state, "state");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + stat, RTE_UINT8);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + max_ack_win, "max_ack_win");
> +cmdline_parse_token_num_t
> cmd_set_conntrack_common_max_ackwin_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + factor, RTE_UINT8);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + retrans, "r_lim");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + re_num, RTE_UINT8);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_win, "last_win");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + lw, RTE_UINT16);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_seq, "last_seq");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + ls, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_ack, "last_ack");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + la, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_end, "last_end");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + le, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_common_last_index =
> + TOKEN_STRING_INITIALIZER(struct
> cmd_set_conntrack_common_result,
> + last_index, "last_index");
> +cmdline_parse_token_num_t cmd_set_conntrack_common_last_index_value
> =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> + li, RTE_UINT8);
> +
> +static void cmd_set_conntrack_common_parsed(void *parsed_result,
> + __rte_unused struct cmdline *cl,
> + __rte_unused void *data)
> +{
> + struct cmd_set_conntrack_common_result *res = parsed_result;
> +
> + /* No need to swap to big endian. */
> + conntrack_context.peer_port = res->peer_port;
> + conntrack_context.is_original_dir = res->is_original;
> + conntrack_context.enable = res->en;
> + conntrack_context.live_connection = res->is_live;
> + conntrack_context.selective_ack = res->s_ack;
> + conntrack_context.challenge_ack_passed = res->c_ack;
> + conntrack_context.last_direction = res->ld;
> + conntrack_context.liberal_mode = res->lb;
> + conntrack_context.state = (enum rte_flow_conntrack_state)res->stat;
> + conntrack_context.max_ack_window = res->factor;
> + conntrack_context.retransmission_limit = res->re_num;
> + conntrack_context.last_window = res->lw;
> + conntrack_context.last_index =
> + (enum rte_flow_conntrack_tcp_last_index)res->li;
> + conntrack_context.last_seq = res->ls;
> + conntrack_context.last_ack = res->la;
> + conntrack_context.last_end = res->le;
> +}
> +
> +cmdline_parse_inst_t cmd_set_conntrack_common = {
> + .f = cmd_set_conntrack_common_parsed,
> + .data = NULL,
> + .help_str = "set conntrack com peer <port_id> is_orig <dir> enable
> <en>"
> + " live <ack_seen> sack <en> cack <passed> last_dir <dir>"
> + " liberal <en> state <s> max_ack_win <factor> r_lim <num>"
> + " last_win <win> last_seq <seq> last_ack <ack> last_end
> <end>"
> + " last_index <flag>",
> + .tokens = {
> + (void *)&cmd_set_conntrack_set,
> > + (void *)&cmd_set_conntrack_conntrack,
> > + (void *)&cmd_set_conntrack_common_com,
> + (void *)&cmd_set_conntrack_common_peer,
> + (void *)&cmd_set_conntrack_common_peer_value,
> + (void *)&cmd_set_conntrack_common_is_orig,
> + (void *)&cmd_set_conntrack_common_is_orig_value,
> + (void *)&cmd_set_conntrack_common_enable,
> + (void *)&cmd_set_conntrack_common_enable_value,
> + (void *)&cmd_set_conntrack_common_live,
> + (void *)&cmd_set_conntrack_common_live_value,
> + (void *)&cmd_set_conntrack_common_sack,
> + (void *)&cmd_set_conntrack_common_sack_value,
> + (void *)&cmd_set_conntrack_common_cack,
> + (void *)&cmd_set_conntrack_common_cack_value,
> + (void *)&cmd_set_conntrack_common_last_dir,
> + (void *)&cmd_set_conntrack_common_last_dir_value,
> + (void *)&cmd_set_conntrack_common_liberal,
> + (void *)&cmd_set_conntrack_common_liberal_value,
> + (void *)&cmd_set_conntrack_common_state,
> + (void *)&cmd_set_conntrack_common_state_value,
> + (void *)&cmd_set_conntrack_common_max_ackwin,
> + (void *)&cmd_set_conntrack_common_max_ackwin_value,
> + (void *)&cmd_set_conntrack_common_retrans,
> + (void *)&cmd_set_conntrack_common_retrans_value,
> + (void *)&cmd_set_conntrack_common_last_win,
> + (void *)&cmd_set_conntrack_common_last_win_value,
> + (void *)&cmd_set_conntrack_common_last_seq,
> + (void *)&cmd_set_conntrack_common_last_seq_value,
> + (void *)&cmd_set_conntrack_common_last_ack,
> + (void *)&cmd_set_conntrack_common_last_ack_value,
> + (void *)&cmd_set_conntrack_common_last_end,
> + (void *)&cmd_set_conntrack_common_last_end_value,
> + (void *)&cmd_set_conntrack_common_last_index,
> + (void *)&cmd_set_conntrack_common_last_index_value,
> + NULL,
> + },
> +};
> +
> +/** Set connection tracking object both directions' details */
> +struct cmd_set_conntrack_dir_result {
> + cmdline_fixed_string_t set;
> + cmdline_fixed_string_t conntrack;
> + cmdline_fixed_string_t dir;
> + cmdline_fixed_string_t scale;
> + cmdline_fixed_string_t fin;
> + cmdline_fixed_string_t ack_seen;
> + cmdline_fixed_string_t unack;
> + cmdline_fixed_string_t sent_end;
> + cmdline_fixed_string_t reply_end;
> + cmdline_fixed_string_t max_win;
> + cmdline_fixed_string_t max_ack;
> + uint32_t factor;
> + uint32_t f;
> + uint32_t as;
> + uint32_t un;
> + uint32_t se;
> + uint32_t re;
> + uint32_t mw;
> + uint32_t ma;
> +};
> +
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + set, "set");
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + conntrack, "conntrack");
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + dir, "orig#rply");
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + scale, "scale");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + factor, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + fin, "fin");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + f, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + ack_seen, "acked");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + as, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + unack, "unack_data");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + un, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + sent_end, "sent_end");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + se, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + reply_end, "reply_end");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + re, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + max_win, "max_win");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + mw, RTE_UINT32);
> +cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + max_ack, "max_ack");
> +cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> + ma, RTE_UINT32);
> +
> +static void cmd_set_conntrack_dir_parsed(void *parsed_result,
> + __rte_unused struct cmdline *cl,
> + __rte_unused void *data)
> +{
> + struct cmd_set_conntrack_dir_result *res = parsed_result;
> + struct rte_flow_tcp_dir_param *dir = NULL;
> +
> + if (strcmp(res->dir, "orig") == 0)
> + dir = &conntrack_context.original_dir;
> + else if (strcmp(res->dir, "rply") == 0)
> + dir = &conntrack_context.reply_dir;
> + else
> + return;
> + dir->scale = res->factor;
> + dir->close_initiated = res->f;
> + dir->last_ack_seen = res->as;
> + dir->data_unacked = res->un;
> + dir->sent_end = res->se;
> + dir->reply_end = res->re;
> + dir->max_ack = res->ma;
> + dir->max_win = res->mw;
> +}
> +
> +cmdline_parse_inst_t cmd_set_conntrack_dir = {
> + .f = cmd_set_conntrack_dir_parsed,
> + .data = NULL,
> + .help_str = "set conntrack orig|rply scale <factor> fin <sent>"
> + " acked <seen> unack_data <unack> sent_end <sent>"
> + " reply_end <reply> max_win <win> max_ack <ack>",
> + .tokens = {
> + (void *)&cmd_set_conntrack_set,
> + (void *)&cmd_set_conntrack_conntrack,
> + (void *)&cmd_set_conntrack_dir_dir,
> + (void *)&cmd_set_conntrack_dir_scale,
> + (void *)&cmd_set_conntrack_dir_scale_value,
> + (void *)&cmd_set_conntrack_dir_fin,
> + (void *)&cmd_set_conntrack_dir_fin_value,
> + (void *)&cmd_set_conntrack_dir_ack,
> + (void *)&cmd_set_conntrack_dir_ack_value,
> + (void *)&cmd_set_conntrack_dir_unack_data,
> + (void *)&cmd_set_conntrack_dir_unack_data_value,
> + (void *)&cmd_set_conntrack_dir_sent_end,
> + (void *)&cmd_set_conntrack_dir_sent_end_value,
> + (void *)&cmd_set_conntrack_dir_reply_end,
> + (void *)&cmd_set_conntrack_dir_reply_end_value,
> + (void *)&cmd_set_conntrack_dir_max_win,
> + (void *)&cmd_set_conntrack_dir_max_win_value,
> + (void *)&cmd_set_conntrack_dir_max_ack,
> + (void *)&cmd_set_conntrack_dir_max_ack_value,
> + NULL,
> + },
> +};
> +
> /* Strict link priority scheduling mode setting */
> static void
> cmd_strict_link_prio_parsed(
> @@ -17117,6 +17469,8 @@ cmdline_parse_ctx_t main_ctx[] = {
> (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
> (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
> (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
> + (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
> + (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
> (cmdline_parse_inst_t *)&cmd_ddp_add,
> (cmdline_parse_inst_t *)&cmd_ddp_del,
> (cmdline_parse_inst_t *)&cmd_ddp_get_list,
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index d83dec942a..fc5e31be5e 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -289,6 +289,7 @@ enum index {
> ITEM_GENEVE_OPT_TYPE,
> ITEM_GENEVE_OPT_LENGTH,
> ITEM_GENEVE_OPT_DATA,
> + ITEM_CONNTRACK,
>
> /* Validate/create actions. */
> ACTIONS,
> @@ -427,6 +428,10 @@ enum index {
> ACTION_MODIFY_FIELD_SRC_OFFSET,
> ACTION_MODIFY_FIELD_SRC_VALUE,
> ACTION_MODIFY_FIELD_WIDTH,
> + ACTION_CONNTRACK,
> + ACTION_CONNTRACK_UPDATE,
> + ACTION_CONNTRACK_UPDATE_DIR,
> + ACTION_CONNTRACK_UPDATE_CTX,
> };
>
> /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -565,6 +570,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
>
> struct mplsoudp_decap_conf mplsoudp_decap_conf;
>
> +struct rte_flow_action_conntrack conntrack_context;
> +
> #define ACTION_SAMPLE_ACTIONS_NUM 10
> #define RAW_SAMPLE_CONFS_MAX_NUM 8
> /** Storage for struct rte_flow_action_sample including external data. */
> @@ -956,6 +963,7 @@ static const enum index next_item[] = {
> ITEM_PFCP,
> ITEM_ECPRI,
> ITEM_GENEVE_OPT,
> + ITEM_CONNTRACK,
> END_SET,
> ZERO,
> };
> @@ -1370,6 +1378,8 @@ static const enum index next_action[] = {
> ACTION_SAMPLE,
> ACTION_INDIRECT,
> ACTION_MODIFY_FIELD,
> + ACTION_CONNTRACK,
> + ACTION_CONNTRACK_UPDATE,
> ZERO,
> };
>
> @@ -1638,6 +1648,13 @@ static const enum index action_modify_field_src[] =
> {
> ZERO,
> };
>
> +static const enum index action_update_conntrack[] = {
> + ACTION_CONNTRACK_UPDATE_DIR,
> + ACTION_CONNTRACK_UPDATE_CTX,
> + ACTION_NEXT,
> + ZERO,
> +};
> +
> static int parse_set_raw_encap_decap(struct context *, const struct token *,
> const char *, unsigned int,
> void *, unsigned int);
> @@ -1728,6 +1745,10 @@ static int
> parse_vc_modify_field_id(struct context *ctx, const struct token *token,
> const char *str, unsigned int len, void *buf,
> unsigned int size);
> +static int
> +parse_vc_action_conntrack_update(struct context *ctx, const struct token
> *token,
> + const char *str, unsigned int len, void *buf,
> + unsigned int size);
> static int parse_destroy(struct context *, const struct token *,
> const char *, unsigned int,
> void *, unsigned int);
> @@ -3373,6 +3394,13 @@ static const struct token token_list[] = {
> (sizeof(struct rte_flow_item_geneve_opt),
> ITEM_GENEVE_OPT_DATA_SIZE)),
> },
> + [ITEM_CONNTRACK] = {
> + .name = "conntrack",
> + .help = "conntrack state",
> + .next = NEXT(NEXT_ENTRY(ITEM_NEXT),
> NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack,
> flags)),
> + },
> /* Validate/create actions. */
> [ACTIONS] = {
> .name = "actions",
> @@ -4471,6 +4499,34 @@ static const struct token token_list[] = {
> .call = parse_vc_action_sample_index,
> .comp = comp_set_sample_index,
> },
> + [ACTION_CONNTRACK] = {
> + .name = "conntrack",
> + .help = "create a conntrack object",
> + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> + .priv = PRIV_ACTION(CONNTRACK,
> + sizeof(struct rte_flow_action_conntrack)),
> + .call = parse_vc,
> + },
> + [ACTION_CONNTRACK_UPDATE] = {
> + .name = "conntrack_update",
> + .help = "update a conntrack object",
> + .next = NEXT(action_update_conntrack),
> + .priv = PRIV_ACTION(CONNTRACK,
> + sizeof(struct rte_flow_modify_conntrack)),
> + .call = parse_vc,
> + },
> + [ACTION_CONNTRACK_UPDATE_DIR] = {
> + .name = "dir",
> + .help = "update a conntrack object direction",
> + .next = NEXT(action_update_conntrack),
> + .call = parse_vc_action_conntrack_update,
> + },
> + [ACTION_CONNTRACK_UPDATE_CTX] = {
> + .name = "ctx",
> + .help = "update a conntrack object context",
> + .next = NEXT(action_update_conntrack),
> + .call = parse_vc_action_conntrack_update,
> + },
> /* Indirect action destroy arguments. */
> [INDIRECT_ACTION_DESTROY_ID] = {
> .name = "action_id",
> @@ -6277,6 +6333,42 @@ parse_vc_modify_field_id(struct context *ctx, const
> struct token *token,
> return len;
> }
>
> +/** Parse the conntrack update, not a rte_flow_action. */
> +static int
> +parse_vc_action_conntrack_update(struct context *ctx, const struct token
> *token,
> + const char *str, unsigned int len, void *buf,
> + unsigned int size)
> +{
> + struct buffer *out = buf;
> + struct rte_flow_modify_conntrack *ct_modify = NULL;
> +
> + (void)size;
> + if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
> + ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
> + return -1;
> + /* Token name must match. */
> + if (parse_default(ctx, token, str, len, NULL, 0) < 0)
> + return -1;
> > + /* Nothing else to do if there is no buffer. */
> > + if (!out)
> > + return len;
> > + ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
> + if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
> + ct_modify->new_ct.is_original_dir =
> + conntrack_context.is_original_dir;
> + ct_modify->direction = 1;
> + } else {
> + uint32_t old_dir;
> +
> + old_dir = ct_modify->new_ct.is_original_dir;
> + memcpy(&ct_modify->new_ct, &conntrack_context,
> + sizeof(conntrack_context));
> + ct_modify->new_ct.is_original_dir = old_dir;
> + ct_modify->state = 1;
> + }
> + return len;
> +}
> +
> /** Parse tokens for destroy command. */
> static int
> parse_destroy(struct context *ctx, const struct token *token,
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 1eec0612a4..06143a7501 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1483,6 +1483,11 @@ port_action_handle_create(portid_t port_id,
> uint32_t id,
>
> pia->age_type =
> ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
> age->context = &pia->age_type;
> + } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
> + struct rte_flow_action_conntrack *ct =
> + (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
> +
> + memcpy(ct, &conntrack_context, sizeof(*ct));
> }
> /* Poisoning to make sure PMDs update it in case of error. */
> memset(&error, 0x22, sizeof(error));
> @@ -1564,11 +1569,24 @@ port_action_handle_update(portid_t port_id,
> uint32_t id,
> {
> struct rte_flow_error error;
> struct rte_flow_action_handle *action_handle;
> + struct port_indirect_action *pia;
> + const void *update;
>
> action_handle = port_action_handle_get_by_id(port_id, id);
> if (!action_handle)
> return -EINVAL;
> - if (rte_flow_action_handle_update(port_id, action_handle, action,
> + pia = action_get_by_id(port_id, id);
> + if (!pia)
> + return -EINVAL;
> + switch (pia->type) {
> + case RTE_FLOW_ACTION_TYPE_CONNTRACK:
> + update = action->conf;
> + break;
> + default:
> + update = action;
> + break;
> + }
> + if (rte_flow_action_handle_update(port_id, action_handle, update,
> &error)) {
> return port_flow_complain(&error);
> }
> @@ -1621,6 +1639,51 @@ port_action_handle_query(portid_t port_id,
> uint32_t id)
> }
> data = NULL;
> break;
> + case RTE_FLOW_ACTION_TYPE_CONNTRACK:
> + if (!ret) {
> + struct rte_flow_action_conntrack *ct = data;
> +
> + printf("Conntrack Context:\n"
> + " Peer: %u, Flow dir: %s, Enable: %u\n"
> + " Live: %u, SACK: %u, CACK: %u\n"
> + " Packet dir: %s, Liberal: %u, State: %u\n"
> + " Factor: %u, Retrans: %u, TCP flags: %u\n"
> + " Last Seq: %u, Last ACK: %u\n"
> + " Last Win: %u, Last End: %u\n",
> + ct->peer_port,
> + ct->is_original_dir ? "Original" : "Reply",
> + ct->enable, ct->live_connection,
> + ct->selective_ack, ct->challenge_ack_passed,
> + ct->last_direction ? "Original" : "Reply",
> + ct->liberal_mode, ct->state,
> + ct->max_ack_window, ct->retransmission_limit,
> + ct->last_index, ct->last_seq, ct->last_ack,
> + ct->last_window, ct->last_end);
> + printf(" Original Dir:\n"
> + " scale: %u, fin: %u, ack seen: %u\n"
> + " unacked data: %u\n Sent end: %u,"
> + " Reply end: %u, Max win: %u, Max ACK: %u\n",
> + ct->original_dir.scale,
> + ct->original_dir.close_initiated,
> + ct->original_dir.last_ack_seen,
> + ct->original_dir.data_unacked,
> + ct->original_dir.sent_end,
> + ct->original_dir.reply_end,
> + ct->original_dir.max_win,
> + ct->original_dir.max_ack);
> + printf(" Reply Dir:\n"
> + " scale: %u, fin: %u, ack seen: %u\n"
> + " unacked data: %u\n Sent end: %u,"
> + " Reply end: %u, Max win: %u, Max ACK: %u\n",
> + ct->reply_dir.scale,
> + ct->reply_dir.close_initiated,
> + ct->reply_dir.last_ack_seen,
> + ct->reply_dir.data_unacked,
> + ct->reply_dir.sent_end, ct->reply_dir.reply_end,
> + ct->reply_dir.max_win, ct->reply_dir.max_ack);
> + }
> + data = NULL;
> + break;
> default:
> printf("Indirect action %u (type: %d) on port %u doesn't"
> " support query\n", id, pia->type, port_id);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index d1eaaadb17..d7528f9cb5 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf
> mplsoudp_decap_conf;
>
> extern enum rte_eth_rx_mq_mode rx_mq_mode;
>
> +extern struct rte_flow_action_conntrack conntrack_context;
> +
> static inline unsigned int
> lcore_num(void)
> {
> --
> 2.19.0.windows.1
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 1/2] " Bing Zhao
@ 2021-04-16 10:49 ` Thomas Monjalon
2021-04-16 18:18 ` Bing Zhao
2021-04-16 12:41 ` Ori Kam
1 sibling, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-16 10:49 UTC (permalink / raw)
To: Bing Zhao
Cc: orika, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde,
jerinj, humin29, rosen.xu, hemant.agrawal
15/04/2021 18:41, Bing Zhao:
> This commit introduced the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow the application should add
> the conntrack action and in the following flow(s) the application
> should use the conntrack item to match on the result.
You probably mean "flow rule", not "traffic flow".
Please make it clear to avoid confusion.
> A TCP connection has two directions traffic. To set a conntrack
> action context correctly, information from packets of both directions
> are required.
>
> The conntrack action should be created on one port and supply the
> peer port as a parameter to the action. After context creating, it
> could only be used between the ports (dual-port mode) or a single
> port. The application should modify the action via the API
> "action_handle_update" only when before using it to create a flow
> with opposite direction. This will help the driver to recognize the
> direction of the flow to be created, especially in single port mode.
> The traffic from both directions will go through the same port if
> the application works as a "forwarding engine" rather than an end point.
> There is no need to call the update interface if the subsequent flows
> have nothing to be changed.
I am not sure this is a feature description for the commit log
or an usage explanation for the doc.
In any case, please distinguish "ethdev port" and "TCP port"
to avoid confusion.
> Query will be supported via action_ctx_query interface, about the
> current packet information and connection status. The fields'
> query capabilities depend on the HW.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to take full advantage of the
What do you mean by "full advantage"?
It is counter-intuitive to re-inject for offloading.
Does it improve the performance?
> conntrack. Only the valid packets should pass the conntrack, packets
> with invalid TCP information, like out of window, or with invalid
> header, like malformed, should not pass.
>
> Naming and definition:
You mean naming is inspired from Linux?
> https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
>
> Other reference:
> https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
[...]
> + /**
> + * [META]
> + *
> + * Matches conntrack state.
> + *
> + * See struct rte_flow_item_conntrack.
Please use @see for hyperlink in doxygen.
> + */
> + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> };
[...]
> +/**
> + * The packet is with valid state after conntrack checking.
"is with valid state" looks strange.
I propose "The packet is valid after conntrack checking."
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
Please use RTE_BIT32().
> +/**
> + * The state of the connection was changed.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
> +/**
> + * Error is detected on this packet for this connection and
> + * an invalid state is set.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
"INVAL" is strange. Can we add the missing 2 characters?
RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVALID
On a related note, do we really need the word FLAG?
And it is conflicting with the prefix in
enum rte_flow_conntrack_tcp_last_index
I think RTE_FLOW_CONNTRACK_PKT_STATE_ is a good prefix, long enough.
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
This one does not have PKT in its name.
And it is limiting to HW, while the driver could implement conntrack in SW.
I propose RTE_FLOW_CONNTRACK_PKT_DISABLED
> +/**
> + * The packet contains some bad field(s) and cannot continue
> + * with the conntrack module checking.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> + *
> + * Matches the state of a packet after it passed the connection tracking
> + * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
s/bit mask/bitmap/ ?
RTE_FLOW_CONNTRACK_PKT_STATE_*
otherwise it is messed with rte_flow_conntrack_tcp_last_index
> + * or a reasonable combination of these bits.
> + */
> +struct rte_flow_item_conntrack {
> + uint32_t flags;
> +};
[...]
> +
> + /**
> + * [META]
> + *
> + * Enable tracking a TCP connection state.
> + *
> + * Send packet to HW connection tracking module for examination.
Not necessarily HW.
No packet is sent.
I think you can remove this sentence completely.
> + *
> + * See struct rte_flow_action_conntrack.
@see
> + */
> + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> };
>
> /**
> @@ -2875,6 +2940,136 @@ struct rte_flow_action_set_dscp {
> */
> struct rte_flow_action_handle;
>
> +/**
> + * The state of a TCP connection.
> + */
> +enum rte_flow_conntrack_state {
> + /**< SYN-ACK packet was seen. */
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< 3-way handshark was done. */
s/handshark/handshake/
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< Second FIN was received, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< Second FIN was ACKed, connection was closed. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> +};
> +
> +/**
> + * The last passed TCP packet flags of a connection.
> + */
> +enum rte_flow_conntrack_tcp_last_index {
> + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag. */
> +};
Please use RTE_BIT32().
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * Configuration parameters for each direction of a TCP connection.
> + */
> +struct rte_flow_tcp_dir_param {
> + uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
> + uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
> + /**< An ACK packet has been received by this side. */
Move all comments on their own line before the struct member.
Comment should then start with /**
> + uint32_t last_ack_seen:1;
> + /**< If set, indicates that there is unacked data of the connection. */
I am not sure what "unacked data of the connection" means.
> + uint32_t data_unacked:1;
> + /**< Maximal value of sequence + payload length over sent
> + * packets (next ACK from the opposite direction).
> + */
> + uint32_t sent_end;
> + /**< Maximal value of (ACK + window size) over received packet + length
> + * over sent packet (maximal sequence could be sent).
> + */
> + uint32_t reply_end;
> + /**< Maximal value of actual window size over sent packets. */
> + uint32_t max_win;
> + /**< Maximal value of ACK over sent packets. */
> + uint32_t max_ack;
Not sure about the word "over" in above definitions.
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Configuration and initial state for the connection tracking module.
> + * This structure could be used for both setting and query.
> + */
> +struct rte_flow_action_conntrack {
> + uint16_t peer_port; /**< The peer port number, can be the same port. */
> + /**< Direction of this connection when creating a flow, the value only
> + * affects the subsequent flows creation.
> + */
As for rte_flow_tcp_dir_param, better to move comments before,
on their own line.
> + uint32_t is_original_dir:1;
> + /**< Enable / disable the conntrack HW module. When disabled, the
> + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> + * In this state the HW will act as passthrough.
> + * It only affects this conntrack object in the HW without any effect
> + * to the other objects.
> + */
> + uint32_t enable:1;
> + /**< At least one ack was seen, after the connection was established. */
> + uint32_t live_connection:1;
> + /**< Enable selective ACK on this connection. */
> + uint32_t selective_ack:1;
> + /**< A challenge ack has passed. */
> + uint32_t challenge_ack_passed:1;
> + /**< 1: The last packet is seen that comes from the original direction.
> + * 0: From the reply direction.
> + */
> + uint32_t last_direction:1;
> + /**< No TCP check will be done except the state change. */
> + uint32_t liberal_mode:1;
> + /**< The current state of the connection. */
> + enum rte_flow_conntrack_state state;
> + /**< Scaling factor for maximal allowed ACK window. */
> + uint8_t max_ack_window;
> + /**< Maximal allowed number of retransmission times. */
> + uint8_t retransmission_limit;
> + /**< TCP parameters of the original direction. */
> + struct rte_flow_tcp_dir_param original_dir;
> + /**< TCP parameters of the reply direction. */
> + struct rte_flow_tcp_dir_param reply_dir;
> + /**< The window value of the last packet passed this conntrack. */
> + uint16_t last_window;
> + enum rte_flow_conntrack_tcp_last_index last_index;
> + /**< The sequence of the last packet passed this conntrack. */
> + uint32_t last_seq;
> + /**< The acknowledgement of the last packet passed this conntrack. */
> + uint32_t last_ack;
> + /**< The total value ACK + payload length of the last packet passed
> + * this conntrack.
> + */
> + uint32_t last_end;
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Wrapper structure for the context update interface.
> + * Ports cannot support updating, and the only valid solution is to
> + * destroy the old context and create a new one instead.
> + */
> +struct rte_flow_modify_conntrack {
> + /**< New connection tracking parameters to be updated. */
> + struct rte_flow_action_conntrack new_ct;
> + uint32_t direction:1; /**< The direction field will be updated. */
> + /**< All the other fields except direction will be updated. */
> + uint32_t state:1;
> + uint32_t reserved:30; /**< Reserved bits for the future usage. */
> +};
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 1/2] " Bing Zhao
2021-04-16 10:49 ` Thomas Monjalon
@ 2021-04-16 12:41 ` Ori Kam
2021-04-16 18:05 ` Bing Zhao
1 sibling, 1 reply; 45+ messages in thread
From: Ori Kam @ 2021-04-16 12:41 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Bing,
One more thought, PSB
Best,
Ori
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Thursday, April 15, 2021 7:41 PM
> To: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; ajit.khaparde@broadcom.com
> Subject: [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
>
> This commit introduced the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow the application should add
> the conntrack action and in the following flow(s) the application
> should use the conntrack item to match on the result.
>
> A TCP connection has traffic in two directions. To set a conntrack
> action context correctly, information from packets of both directions
> is required.
>
> The conntrack action should be created on one port and supply the
> peer port as a parameter to the action. After the context is created,
> it could only be used between the ports (dual-port mode) or on a
> single port. The application should modify the action via the API
> "action_handle_update" only before using it to create a flow
> with the opposite direction. This will help the driver to recognize
> the direction of the flow to be created, especially in single-port mode.
> The traffic from both directions will go through the same port if
> the application works as a "forwarding engine" but not an end point.
> There is no need to call the update interface if the subsequent flows
> have nothing to be changed.
>
> Query will be supported via the action_ctx_query interface, about the
> current packets information and connection status. The fields'
> query capabilities depend on the HW.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to take full advantage of the
> conntrack. Only the valid packets should pass the conntrack, packets
> with invalid TCP information, like out of window, or with invalid
> header, like malformed, should not pass.
>
> Naming and definition:
> https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
>
> Other reference:
> https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> lib/librte_ethdev/rte_flow.c | 2 +
> lib/librte_ethdev/rte_flow.h | 195 +++++++++++++++++++++++++++++++++++
> 2 files changed, 197 insertions(+)
>
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index 27a161559d..0af601d508 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
> MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
> MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
> MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
> + MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
> };
>
> /** Generate flow_action[] entry. */
> @@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
> * indirect action handle.
> */
> MK_FLOW_ACTION(INDIRECT, 0),
> + MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
> };
>
> int
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 91ae25b1da..024d1a2026 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches conntrack state.
> + *
> + * See struct rte_flow_item_conntrack.
> + */
> + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> };
>
> /**
> @@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +/**
> + * The packet is with valid state after conntrack checking.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
> +/**
> + * The state of the connection was changed.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
> +/**
> + * Error is detected on this packet for this connection and
> + * an invalid state is set.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
> +/**
> + * The packet contains some bad field(s) and cannot continue
> + * with the conntrack module checking.
> + */
> +#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> + *
> + * Matches the state of a packet after it passed the connection tracking
> + * examination. The state is a bit mask of one RTE_FLOW_CONNTRACK_FLAG*
> + * or a reasonable combination of these bits.
> + */
> +struct rte_flow_item_conntrack {
> + uint32_t flags;
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
> + .flags = 0xffffffff,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> @@ -2277,6 +2331,17 @@ enum rte_flow_action_type {
> * same port or across different ports.
> */
> RTE_FLOW_ACTION_TYPE_INDIRECT,
> +
> + /**
> + * [META]
> + *
> + * Enable tracking a TCP connection state.
> + *
> + * Send packet to HW connection tracking module for examination.
> + *
> + * See struct rte_flow_action_conntrack.
> + */
> + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> };
>
> /**
> @@ -2875,6 +2940,136 @@ struct rte_flow_action_set_dscp {
> */
> struct rte_flow_action_handle;
>
> +/**
> + * The state of a TCP connection.
> + */
> +enum rte_flow_conntrack_state {
> + /**< SYN-ACK packet was seen. */
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< 3-way handshark was done. */
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< Second FIN was received, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< Second FIN was ACKed, connection was closed. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> +};
> +
> +/**
> + * The last passed TCP packet flags of a connection.
> + */
> +enum rte_flow_conntrack_tcp_last_index {
> + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * Configuration parameters for each direction of a TCP connection.
> + */
> +struct rte_flow_tcp_dir_param {
> + uint32_t scale:4; /**< TCP window scaling factor, 0xF to disable. */
> + uint32_t close_initiated:1; /**< The FIN was sent by this direction. */
> + /**< An ACK packet has been received by this side. */
> + uint32_t last_ack_seen:1;
> + /**< If set, indicates that there is unacked data of the connection. */
> + uint32_t data_unacked:1;
> + /**< Maximal value of sequence + payload length over sent
> + * packets (next ACK from the opposite direction).
> + */
> + uint32_t sent_end;
> + /**< Maximal value of (ACK + window size) over received packet + length
> + * over sent packet (maximal sequence could be sent).
> + */
> + uint32_t reply_end;
This comment applies to all members that come from the packet itself.
Do you think they should be in network byte order?
I can see the advantage both ways, since I assume the app needs this data
in host byte order. But since in most other cases we use network byte order
to set values that come from the packet itself, maybe it is better to use
network byte order (it will also save the conversion).
> + /**< Maximal value of actual window size over sent packets. */
> + uint32_t max_win;
> + /**< Maximal value of ACK over sent packets. */
> + uint32_t max_ack;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Configuration and initial state for the connection tracking module.
> + * This structure could be used for both setting and query.
> + */
> +struct rte_flow_action_conntrack {
> + uint16_t peer_port; /**< The peer port number, can be the same port. */
> + /**< Direction of this connection when creating a flow, the value only
> + * affects the subsequent flows creation.
> + */
> + uint32_t is_original_dir:1;
> + /**< Enable / disable the conntrack HW module. When disabled, the
> + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> + * In this state the HW will act as passthrough.
> + * It only affects this conntrack object in the HW without any effect
> + * to the other objects.
> + */
> + uint32_t enable:1;
> + /**< At least one ack was seen, after the connection was established. */
> + uint32_t live_connection:1;
> + /**< Enable selective ACK on this connection. */
> + uint32_t selective_ack:1;
> + /**< A challenge ack has passed. */
> + uint32_t challenge_ack_passed:1;
> + /**< 1: The last packet is seen that comes from the original direction.
> + * 0: From the reply direction.
> + */
> + uint32_t last_direction:1;
> + /**< No TCP check will be done except the state change. */
> + uint32_t liberal_mode:1;
> + /**< The current state of the connection. */
> + enum rte_flow_conntrack_state state;
> + /**< Scaling factor for maximal allowed ACK window. */
> + uint8_t max_ack_window;
> + /**< Maximal allowed number of retransmission times. */
> + uint8_t retransmission_limit;
> + /**< TCP parameters of the original direction. */
> + struct rte_flow_tcp_dir_param original_dir;
> + /**< TCP parameters of the reply direction. */
> + struct rte_flow_tcp_dir_param reply_dir;
> + /**< The window value of the last packet passed this conntrack. */
> + uint16_t last_window;
> + enum rte_flow_conntrack_tcp_last_index last_index;
> + /**< The sequence of the last packet passed this conntrack. */
> + uint32_t last_seq;
> + /**< The acknowledgement of the last packet passed this conntrack. */
> + uint32_t last_ack;
> + /**< The total value ACK + payload length of the last packet passed
> + * this conntrack.
> + */
> + uint32_t last_end;
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Wrapper structure for the context update interface.
> + * Ports cannot support updating, and the only valid solution is to
> + * destroy the old context and create a new one instead.
> + */
> +struct rte_flow_modify_conntrack {
> + /**< New connection tracking parameters to be updated. */
> + struct rte_flow_action_conntrack new_ct;
> + uint32_t direction:1; /**< The direction field will be updated. */
> + /**< All the other fields except direction will be updated. */
> + uint32_t state:1;
> + uint32_t reserved:30; /**< Reserved bits for the future usage. */
> +};
> +
> /**
> * Field IDs for MODIFY_FIELD action.
> */
> --
> 2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
2021-04-15 16:24 ` Ori Kam
2021-04-15 16:41 ` [dpdk-dev] [PATCH v2 0/2] " Bing Zhao
@ 2021-04-16 17:54 ` Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 1/3] " Bing Zhao
` (2 more replies)
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 0/2] ethdev: introduce conntrack flow action and item Bing Zhao
4 siblings, 3 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 17:54 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
Depends-on: series-16451 ("Change shared action API to action handle API")
This patch set includes the conntrack action and item definitions as
well as the testpmd CLI proposal.
Documents of release notes and guides are also updated.
---
v2: add testpmd CLI proposal
v3: add doc update
---
Bing Zhao (3):
ethdev: introduce conntrack flow action and item
app/testpmd: add CLI for conntrack
doc: update for conntrack
app/test-pmd/cmdline.c | 354 ++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 +++++
app/test-pmd/config.c | 65 +++-
app/test-pmd/testpmd.h | 2 +
doc/guides/prog_guide/rte_flow.rst | 113 +++++++
doc/guides/rel_notes/release_21_05.rst | 4 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 207 ++++++++++++
9 files changed, 873 insertions(+), 1 deletion(-)
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-04-16 17:54 ` Bing Zhao
2021-04-16 18:30 ` Ajit Khaparde
2021-04-19 14:06 ` Thomas Monjalon
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 2/3] app/testpmd: add CLI for conntrack Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 3/3] doc: update " Bing Zhao
2 siblings, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 17:54 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
This commit introduces the conntrack action and item.
Usually the HW offloading is stateless. For some stateful offloading
like a TCP connection, HW module will help provide the ability of a
full offloading w/o SW participation after the connection was
established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one ethdev port and supply
the peer ethdev port as a parameter to the action. After the context
is created, it could only be used between these two ethdev ports
(dual-port mode) or a single port. The application should modify the
action via the API "rte_action_handle_update" only before using
it to create a flow rule with conntrack for the opposite
direction. This will help the driver to recognize the direction of
the flow to be created, especially in the single-port mode, in which
case the traffic from both directions will go through the same
ethdev port if the application works as a "forwarding engine" but
not an end point. There is no need to call the update interface if
the subsequent flow rules have nothing to be changed.
Query will be supported via "rte_action_handle_query" interface,
about the current packets information and connection status. The
fields' query capabilities depend on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject the packets in order to make sure the conntrack module
works correctly without missing any packet. Only the valid packets
should pass the conntrack, packets with invalid TCP information,
like out of window, or with invalid header, like malformed, should
not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 207 +++++++++++++++++++++++++++++++++++
2 files changed, 209 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 0d2610b7c4..c7c7108933 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+ MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
};
/** Generate flow_action[] entry. */
@@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
* indirect action handle.
*/
MK_FLOW_ACTION(INDIRECT, 0),
+ MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
};
int
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 324d00abdc..c9d7bdfa57 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * @see struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is valid after conntrack checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
+/**
+ * The state of the connection is changed.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
+/**
+ * Error is detected on this packet for this connection and
+ * an invalid state is set.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
+/**
+ * The packet contains some bad field(s) and cannot continue
+ * with the conntrack module checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bitmap of one RTE_FLOW_CONNTRACK_PKT_STATE*
+ * or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2277,6 +2331,15 @@ enum rte_flow_action_type {
* same port or across different ports.
*/
RTE_FLOW_ACTION_TYPE_INDIRECT,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * @see struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2875,6 +2938,150 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_action_handle;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ /** SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /** 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /** First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /** First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /** Second FIN was received, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /** Second FIN was ACKed, connection was closed. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_tcp_last_index {
+ RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYN = RTE_BIT32(0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYNACK = RTE_BIT32(1), /**< With SYNACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_FIN = RTE_BIT32(2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_ACK = RTE_BIT32(3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_RST = RTE_BIT32(4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ */
+struct rte_flow_tcp_dir_param {
+ /** TCP window scaling factor, 0xF to disable. */
+ uint32_t scale:4;
+ /** The FIN was sent by this direction. */
+ uint32_t close_initiated:1;
+ /** An ACK packet has been received by this side. */
+ uint32_t last_ack_seen:1;
+ /**
+ * If set, it indicates that there is unacknowledged data for the
+ * packets sent from this direction.
+ */
+ uint32_t data_unacked:1;
+ /**
+ * Maximal value of sequence + payload length in sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t sent_end;
+ /**
+ * Maximal value of (ACK + window size) in received packet + length
+ * over sent packet (maximal sequence could be sent).
+ */
+ uint32_t reply_end;
+ /** Maximal value of actual window size in sent packets. */
+ uint32_t max_win;
+ /** Maximal value of ACK in sent packets. */
+ uint32_t max_ack;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ */
+struct rte_flow_action_conntrack {
+ /** The peer port number, can be the same port. */
+ uint16_t peer_port;
+ /**
+ * Direction of this connection when creating a flow, the value
+ * only affects the subsequent flows creation.
+ */
+ uint32_t is_original_dir:1;
+ /**
+ * Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED.
+ * In this state the HW will act as passthrough.
+ * It only affects this conntrack object in the HW without any effect
+ * to the other objects.
+ */
+ uint32_t enable:1;
+ /** At least one ack was seen after the connection was established. */
+ uint32_t live_connection:1;
+ /** Enable selective ACK on this connection. */
+ uint32_t selective_ack:1;
+ /** A challenge ack has passed. */
+ uint32_t challenge_ack_passed:1;
+ /**
+ * 1: The last packet is seen from the original direction.
+ * 0: The last packet is seen from the reply direction.
+ */
+ uint32_t last_direction:1;
+ /** No TCP check will be done except the state change. */
+ uint32_t liberal_mode:1;
+ /** The current state of this connection. */
+ enum rte_flow_conntrack_state state;
+ /** Scaling factor for maximal allowed ACK window. */
+ uint8_t max_ack_window;
+ /** Maximal allowed number of retransmission times. */
+ uint8_t retransmission_limit;
+ /** TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /** TCP parameters of the reply direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /** The window value of the last packet passed this conntrack. */
+ uint16_t last_window;
+ enum rte_flow_conntrack_tcp_last_index last_index;
+ /** The sequence of the last packet passed this conntrack. */
+ uint32_t last_seq;
+ /** The acknowledgement of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /**
+ * The total value ACK + payload length of the last packet
+ * passed this conntrack.
+ */
+ uint32_t last_end;
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * If a port cannot support updating a context in place, the only
+ * valid alternative is to destroy the old context and create a new one.
+ */
+struct rte_flow_modify_conntrack {
+ /** New connection tracking parameters to be updated. */
+ struct rte_flow_action_conntrack new_ct;
+ /** The direction field will be updated. */
+ uint32_t direction:1;
+ /** All the other fields except direction will be updated. */
+ uint32_t state:1;
+ /** Reserved bits for the future usage. */
+ uint32_t reserved:30;
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] app/testpmd: add CLI for conntrack
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 1/3] " Bing Zhao
@ 2021-04-16 17:54 ` Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 3/3] doc: update " Bing Zhao
2 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 17:54 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
The command line for testing connection tracking is added. To create
a conntrack object, three parts are needed:
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow creation. Before updating, the same structure
is also needed, together with the update command "conntrack_update",
to update the "dir" or "ctx".
After the flow rule with the conntrack action is created, the packet
should jump to the next flow rule, where the result is matched with
the conntrack item. The state is defined as a bit mask, and any valid
combination can be matched.
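A possible session combining the three "set conntrack" parts with the indirect-action flow commands could look as follows (a sketch only; all numeric values are purely illustrative):

```
testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 0 sack 1 cack 0
         last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
         last_seq 2632987379 last_ack 2532480967 last_end 2632987379
         last_index 0x8
testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
         sent_end 2632987379 reply_end 2633016339 max_win 28960
         max_ack 2632987379
testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
         sent_end 2532480967 reply_end 2532546247 max_win 65280
         max_ack 2532480967
testpmd> flow indirect_action 0 create ingress action conntrack / end
testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end
         actions indirect 0 / jump group 5 / end
testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp /
         conntrack is 1 / end actions queue index 5 / end
```

The last rule matches the conntrack item produced by the rule in group 3, as described above.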
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
app/test-pmd/cmdline.c | 354 ++++++++++++++++++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 ++++++++++
app/test-pmd/config.c | 65 ++++++-
app/test-pmd/testpmd.h | 2 +
4 files changed, 512 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4d9e038ce8..a318544fc6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13621,6 +13621,358 @@ cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
},
};
+/** Set connection tracking object common details */
+struct cmd_set_conntrack_common_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t common;
+ cmdline_fixed_string_t peer;
+ cmdline_fixed_string_t is_orig;
+ cmdline_fixed_string_t enable;
+ cmdline_fixed_string_t live;
+ cmdline_fixed_string_t sack;
+ cmdline_fixed_string_t cack;
+ cmdline_fixed_string_t last_dir;
+ cmdline_fixed_string_t liberal;
+ cmdline_fixed_string_t state;
+ cmdline_fixed_string_t max_ack_win;
+ cmdline_fixed_string_t retrans;
+ cmdline_fixed_string_t last_win;
+ cmdline_fixed_string_t last_seq;
+ cmdline_fixed_string_t last_ack;
+ cmdline_fixed_string_t last_end;
+ cmdline_fixed_string_t last_index;
+ uint8_t stat;
+ uint8_t factor;
+ uint16_t peer_port;
+ uint32_t is_original;
+ uint32_t en;
+ uint32_t is_live;
+ uint32_t s_ack;
+ uint32_t c_ack;
+ uint32_t ld;
+ uint32_t lb;
+ uint8_t re_num;
+ uint8_t li;
+ uint16_t lw;
+ uint32_t ls;
+ uint32_t la;
+ uint32_t le;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_common_com =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ common, "com");
+cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer, "peer");
+cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer_port, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_orig, "is_orig");
+cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_original, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ enable, "enable");
+cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ en, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_live =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ live, "live");
+cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_live, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ sack, "sack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ s_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ cack, "cack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ c_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_dir, "last_dir");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ld, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ liberal, "liberal");
+cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lb, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_state =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ state, "state");
+cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ stat, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ max_ack_win, "max_ack_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_max_ackwin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ factor, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ retrans, "r_lim");
+cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ re_num, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_win, "last_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lw, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_seq, "last_seq");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ls, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_ack, "last_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ la, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_end, "last_end");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ le, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_index =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_index, "last_index");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_index_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ li, RTE_UINT8);
+
+static void cmd_set_conntrack_common_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_common_result *res = parsed_result;
+
+ /* No need to swap to big endian. */
+ conntrack_context.peer_port = res->peer_port;
+ conntrack_context.is_original_dir = res->is_original;
+ conntrack_context.enable = res->en;
+ conntrack_context.live_connection = res->is_live;
+ conntrack_context.selective_ack = res->s_ack;
+ conntrack_context.challenge_ack_passed = res->c_ack;
+ conntrack_context.last_direction = res->ld;
+ conntrack_context.liberal_mode = res->lb;
+ conntrack_context.state = (enum rte_flow_conntrack_state)res->stat;
+ conntrack_context.max_ack_window = res->factor;
+ conntrack_context.retransmission_limit = res->re_num;
+ conntrack_context.last_window = res->lw;
+ conntrack_context.last_index =
+ (enum rte_flow_conntrack_tcp_last_index)res->li;
+ conntrack_context.last_seq = res->ls;
+ conntrack_context.last_ack = res->la;
+ conntrack_context.last_end = res->le;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_common = {
+ .f = cmd_set_conntrack_common_parsed,
+ .data = NULL,
+ .help_str = "set conntrack com peer <port_id> is_orig <dir> enable <en>"
+ " live <ack_seen> sack <en> cack <passed> last_dir <dir>"
+ " liberal <en> state <s> max_ack_win <factor> r_lim <num>"
+ " last_win <win> last_seq <seq> last_ack <ack> last_end <end>"
+ " last_index <flag>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_common_com,
+ (void *)&cmd_set_conntrack_common_peer,
+ (void *)&cmd_set_conntrack_common_peer_value,
+ (void *)&cmd_set_conntrack_common_is_orig,
+ (void *)&cmd_set_conntrack_common_is_orig_value,
+ (void *)&cmd_set_conntrack_common_enable,
+ (void *)&cmd_set_conntrack_common_enable_value,
+ (void *)&cmd_set_conntrack_common_live,
+ (void *)&cmd_set_conntrack_common_live_value,
+ (void *)&cmd_set_conntrack_common_sack,
+ (void *)&cmd_set_conntrack_common_sack_value,
+ (void *)&cmd_set_conntrack_common_cack,
+ (void *)&cmd_set_conntrack_common_cack_value,
+ (void *)&cmd_set_conntrack_common_last_dir,
+ (void *)&cmd_set_conntrack_common_last_dir_value,
+ (void *)&cmd_set_conntrack_common_liberal,
+ (void *)&cmd_set_conntrack_common_liberal_value,
+ (void *)&cmd_set_conntrack_common_state,
+ (void *)&cmd_set_conntrack_common_state_value,
+ (void *)&cmd_set_conntrack_common_max_ackwin,
+ (void *)&cmd_set_conntrack_common_max_ackwin_value,
+ (void *)&cmd_set_conntrack_common_retrans,
+ (void *)&cmd_set_conntrack_common_retrans_value,
+ (void *)&cmd_set_conntrack_common_last_win,
+ (void *)&cmd_set_conntrack_common_last_win_value,
+ (void *)&cmd_set_conntrack_common_last_seq,
+ (void *)&cmd_set_conntrack_common_last_seq_value,
+ (void *)&cmd_set_conntrack_common_last_ack,
+ (void *)&cmd_set_conntrack_common_last_ack_value,
+ (void *)&cmd_set_conntrack_common_last_end,
+ (void *)&cmd_set_conntrack_common_last_end_value,
+ (void *)&cmd_set_conntrack_common_last_index,
+ (void *)&cmd_set_conntrack_common_last_index_value,
+ NULL,
+ },
+};
+
+/** Set connection tracking object both directions' details */
+struct cmd_set_conntrack_dir_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t dir;
+ cmdline_fixed_string_t scale;
+ cmdline_fixed_string_t fin;
+ cmdline_fixed_string_t ack_seen;
+ cmdline_fixed_string_t unack;
+ cmdline_fixed_string_t sent_end;
+ cmdline_fixed_string_t reply_end;
+ cmdline_fixed_string_t max_win;
+ cmdline_fixed_string_t max_ack;
+ uint32_t factor;
+ uint32_t f;
+ uint32_t as;
+ uint32_t un;
+ uint32_t se;
+ uint32_t re;
+ uint32_t mw;
+ uint32_t ma;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ dir, "orig#rply");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ scale, "scale");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ factor, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ fin, "fin");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ f, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ack_seen, "acked");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ as, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ unack, "unack_data");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ un, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ sent_end, "sent_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ se, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ reply_end, "reply_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ re, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_win, "max_win");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ mw, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_ack, "max_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ma, RTE_UINT32);
+
+static void cmd_set_conntrack_dir_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_dir_result *res = parsed_result;
+ struct rte_flow_tcp_dir_param *dir = NULL;
+
+ if (strcmp(res->dir, "orig") == 0)
+ dir = &conntrack_context.original_dir;
+ else if (strcmp(res->dir, "rply") == 0)
+ dir = &conntrack_context.reply_dir;
+ else
+ return;
+ dir->scale = res->factor;
+ dir->close_initiated = res->f;
+ dir->last_ack_seen = res->as;
+ dir->data_unacked = res->un;
+ dir->sent_end = res->se;
+ dir->reply_end = res->re;
+ dir->max_ack = res->ma;
+ dir->max_win = res->mw;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_dir = {
+ .f = cmd_set_conntrack_dir_parsed,
+ .data = NULL,
+ .help_str = "set conntrack orig|rply scale <factor> fin <sent>"
+ " acked <seen> unack_data <unack> sent_end <sent>"
+ " reply_end <reply> max_win <win> max_ack <ack>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_dir_dir,
+ (void *)&cmd_set_conntrack_dir_scale,
+ (void *)&cmd_set_conntrack_dir_scale_value,
+ (void *)&cmd_set_conntrack_dir_fin,
+ (void *)&cmd_set_conntrack_dir_fin_value,
+ (void *)&cmd_set_conntrack_dir_ack,
+ (void *)&cmd_set_conntrack_dir_ack_value,
+ (void *)&cmd_set_conntrack_dir_unack_data,
+ (void *)&cmd_set_conntrack_dir_unack_data_value,
+ (void *)&cmd_set_conntrack_dir_sent_end,
+ (void *)&cmd_set_conntrack_dir_sent_end_value,
+ (void *)&cmd_set_conntrack_dir_reply_end,
+ (void *)&cmd_set_conntrack_dir_reply_end_value,
+ (void *)&cmd_set_conntrack_dir_max_win,
+ (void *)&cmd_set_conntrack_dir_max_win_value,
+ (void *)&cmd_set_conntrack_dir_max_ack,
+ (void *)&cmd_set_conntrack_dir_max_ack_value,
+ NULL,
+ },
+};
+
/* Strict link priority scheduling mode setting */
static void
cmd_strict_link_prio_parsed(
@@ -17120,6 +17472,8 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
(cmdline_parse_inst_t *)&cmd_ddp_add,
(cmdline_parse_inst_t *)&cmd_ddp_del,
(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c5381c638b..d82b08c609 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,7 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_CONNTRACK,
/* Validate/create actions. */
ACTIONS,
@@ -431,6 +432,10 @@ enum index {
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_WIDTH,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -569,6 +574,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
struct mplsoudp_decap_conf mplsoudp_decap_conf;
+struct rte_flow_action_conntrack conntrack_context;
+
#define ACTION_SAMPLE_ACTIONS_NUM 10
#define RAW_SAMPLE_CONFS_MAX_NUM 8
/** Storage for struct rte_flow_action_sample including external data. */
@@ -968,6 +975,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_CONNTRACK,
END_SET,
ZERO,
};
@@ -1382,6 +1390,8 @@ static const enum index next_action[] = {
ACTION_SAMPLE,
ACTION_INDIRECT,
ACTION_MODIFY_FIELD,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
ZERO,
};
@@ -1650,6 +1660,13 @@ static const enum index action_modify_field_src[] = {
ZERO,
};
+static const enum index action_update_conntrack[] = {
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1740,6 +1757,10 @@ static int
parse_vc_modify_field_id(struct context *ctx, const struct token *token,
const char *str, unsigned int len, void *buf,
unsigned int size);
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size);
static int parse_destroy(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -3400,6 +3421,13 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "conntrack state",
+ .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -4498,6 +4526,34 @@ static const struct token token_list[] = {
.call = parse_vc_action_sample_index,
.comp = comp_set_sample_index,
},
+ [ACTION_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "create a conntrack object",
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_action_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE] = {
+ .name = "conntrack_update",
+ .help = "update a conntrack object",
+ .next = NEXT(action_update_conntrack),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_modify_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE_DIR] = {
+ .name = "dir",
+ .help = "update a conntrack object direction",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
+ [ACTION_CONNTRACK_UPDATE_CTX] = {
+ .name = "ctx",
+ .help = "update a conntrack object context",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
@@ -6304,6 +6360,42 @@ parse_vc_modify_field_id(struct context *ctx, const struct token *token,
return len;
}
+/** Parse the conntrack update, not a rte_flow_action. */
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size)
+{
+ struct buffer *out = buf;
+ struct rte_flow_modify_conntrack *ct_modify = NULL;
+
+ (void)size;
+ if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
+ ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
+ return -1;
+ /* Token name must match. */
+ if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+ return -1;
+ /* Nothing else to do if there is no buffer. */
+ if (!out)
+ return len;
+ ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
+ if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
+ ct_modify->new_ct.is_original_dir =
+ conntrack_context.is_original_dir;
+ ct_modify->direction = 1;
+ } else {
+ uint32_t old_dir;
+
+ old_dir = ct_modify->new_ct.is_original_dir;
+ memcpy(&ct_modify->new_ct, &conntrack_context,
+ sizeof(conntrack_context));
+ ct_modify->new_ct.is_original_dir = old_dir;
+ ct_modify->state = 1;
+ }
+ return len;
+}
+
/** Parse tokens for destroy command. */
static int
parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index c219ef25f7..02b7d4719a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1484,6 +1484,11 @@ port_action_handle_create(portid_t port_id, uint32_t id,
pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
age->context = &pia->age_type;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
+ struct rte_flow_action_conntrack *ct =
+ (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
+
+ memcpy(ct, &conntrack_context, sizeof(*ct));
}
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x22, sizeof(error));
@@ -1565,11 +1570,24 @@ port_action_handle_update(portid_t port_id, uint32_t id,
{
struct rte_flow_error error;
struct rte_flow_action_handle *action_handle;
+ struct port_indirect_action *pia;
+ const void *update;
action_handle = port_action_handle_get_by_id(port_id, id);
if (!action_handle)
return -EINVAL;
- if (rte_flow_action_handle_update(port_id, action_handle, action,
+ pia = action_get_by_id(port_id, id);
+ if (!pia)
+ return -EINVAL;
+ switch (pia->type) {
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ update = action->conf;
+ break;
+ default:
+ update = action;
+ break;
+ }
+ if (rte_flow_action_handle_update(port_id, action_handle, update,
&error)) {
return port_flow_complain(&error);
}
@@ -1622,6 +1640,51 @@ port_action_handle_query(portid_t port_id, uint32_t id)
}
data = NULL;
break;
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ if (!ret) {
+ struct rte_flow_action_conntrack *ct = data;
+
+ printf("Conntrack Context:\n"
+ " Peer: %u, Flow dir: %s, Enable: %u\n"
+ " Live: %u, SACK: %u, CACK: %u\n"
+ " Packet dir: %s, Liberal: %u, State: %u\n"
+ " Factor: %u, Retrans: %u, TCP flags: %u\n"
+ " Last Seq: %u, Last ACK: %u\n"
+ " Last Win: %u, Last End: %u\n",
+ ct->peer_port,
+ ct->is_original_dir ? "Original" : "Reply",
+ ct->enable, ct->live_connection,
+ ct->selective_ack, ct->challenge_ack_passed,
+ ct->last_direction ? "Original" : "Reply",
+ ct->liberal_mode, ct->state,
+ ct->max_ack_window, ct->retransmission_limit,
+ ct->last_index, ct->last_seq, ct->last_ack,
+ ct->last_window, ct->last_end);
+ printf(" Original Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->original_dir.scale,
+ ct->original_dir.close_initiated,
+ ct->original_dir.last_ack_seen,
+ ct->original_dir.data_unacked,
+ ct->original_dir.sent_end,
+ ct->original_dir.reply_end,
+ ct->original_dir.max_win,
+ ct->original_dir.max_ack);
+ printf(" Reply Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->reply_dir.scale,
+ ct->reply_dir.close_initiated,
+ ct->reply_dir.last_ack_seen,
+ ct->reply_dir.data_unacked,
+ ct->reply_dir.sent_end, ct->reply_dir.reply_end,
+ ct->reply_dir.max_win, ct->reply_dir.max_ack);
+ }
+ data = NULL;
+ break;
default:
printf("Indirect action %u (type: %d) on port %u doesn't"
" support query\n", id, pia->type, port_id);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c314b30f2e..9530ec5fe0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf mplsoudp_decap_conf;
extern enum rte_eth_rx_mq_mode rx_mq_mode;
+extern struct rte_flow_action_conntrack conntrack_context;
+
static inline unsigned int
lcore_num(void)
{
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 3/3] doc: update for conntrack
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 1/3] " Bing Zhao
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 2/3] app/testpmd: add CLI for conntrack Bing Zhao
@ 2021-04-16 17:54 ` Bing Zhao
2021-04-16 18:22 ` Thomas Monjalon
2021-04-16 18:30 ` Ajit Khaparde
2 siblings, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 17:54 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
The updated documentation includes:
1. Release notes
2. rte_flow.rst
3. testpmd user guide
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 113 ++++++++++++++++++++
doc/guides/rel_notes/release_21_05.rst | 4 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++++++
3 files changed, 152 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2ecc48cfff..a1333819fc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,14 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^
+
+Matches a conntrack state after a conntrack action.
+
+- ``flags``: conntrack packet state flags.
+- Default ``mask`` matches all state bits.
+
Actions
~~~~~~~
@@ -2842,6 +2850,111 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
| ``value`` | immediate value or a pointer to this value |
+---------------+----------------------------------------------------------+
+Action: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^^^
+
+Create a conntrack (connection tracking) context with the provided information.
+
+In a stateful session, such as TCP, the conntrack action provides the ability
+to examine every packet of the connection and associate a state with it. This
+helps to realize stateful offloading with little software participation. For
+example, only control packets like SYN / FIN, or packets in an invalid state,
+need to be handled by the software.
+
+A conntrack context should be created via ``rte_flow_action_handle_create()``
+before use. The handle with ``INDIRECT`` type is then used in flow rule
+creation. If a flow rule for the opposite direction needs to be created,
+``rte_flow_action_handle_update()`` should be used to modify the direction.
+
+Not all fields of ``struct rte_flow_action_conntrack`` will be used
+when creating a conntrack context; this depends on the HW.
+``struct rte_flow_modify_conntrack`` should be used for an update.
+
+The current conntrack context information can be queried via the
+``rte_flow_action_handle_query()`` interface.
+
+.. _table_rte_flow_action_conntrack:
+
+.. table:: CONNTRACK
+
+ +--------------------------+-------------------------------------------------------------+
+ | Field | Value |
+ +==========================+=============================================================+
+ | ``peer_port`` | peer port number |
+ +--------------------------+-------------------------------------------------------------+
+ | ``is_original_dir`` | direction of this connection for flow rule creation |
+ +--------------------------+-------------------------------------------------------------+
+ | ``enable`` | enable the conntrack context |
+ +--------------------------+-------------------------------------------------------------+
+ | ``live_connection`` | one ack was seen for this connection |
+ +--------------------------+-------------------------------------------------------------+
+ | ``selective_ack`` | SACK enabled |
+ +--------------------------+-------------------------------------------------------------+
+ | ``challenge_ack_passed`` | a challenge ack has passed |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_direction`` | direction of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``liberal_mode`` | only report state change |
+ +--------------------------+-------------------------------------------------------------+
+ | ``state`` | current state |
+ +--------------------------+-------------------------------------------------------------+
+ | ``max_ack_window`` | maximal window scaling factor |
+ +--------------------------+-------------------------------------------------------------+
+ | ``retransmission_limit`` | maximal retransmission times |
+ +--------------------------+-------------------------------------------------------------+
+ | ``original_dir`` | TCP parameters of the original direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``reply_dir`` | TCP parameters of the reply direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_window`` | window value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_seq`` | sequence value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_ack`` | acknowledgement value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_end`` | sum of acknowledgement and length of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+
+.. _table_rte_flow_tcp_dir_param:
+
+.. table:: configuration parameters for each direction
+
+ +---------------------+---------------------------------------------------------+
+ | Field | Value |
+ +=====================+=========================================================+
+ | ``scale`` | TCP window scaling factor |
+ +---------------------+---------------------------------------------------------+
+ | ``close_initiated`` | FIN sent from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``last_ack_seen`` | an ACK packet received |
+ +---------------------+---------------------------------------------------------+
+ | ``data_unacked`` | unacknowledged data for packets from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``sent_end`` | max{seq + len} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``reply_end`` | max{sack + max{win, 1}} seen in reply packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_win`` | max{max{win, 1}} + {sack - ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_ack`` | max{ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+
+.. _table_rte_flow_modify_conntrack:
+
+.. table:: update a conntrack context
+
+ +----------------+---------------------------------------+
+ | Field | Value |
+ +================+=======================================+
+ | ``new_ct`` | new conntrack information |
+ +----------------+---------------------------------------+
+ | ``direction`` | direction will be updated |
+ +----------------+---------------------------------------+
+ | ``state`` | all fields except direction will be updated |
+ +----------------+---------------------------------------+
+ | ``reserved`` | reserved bits |
+ +----------------+---------------------------------------+
+
Negative types
~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index e6f99350af..824eb72981 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -183,6 +183,10 @@ New Features
the events across multiple stages.
* This also reduced the scheduling overhead on a event device.
+* **Added conntrack support for rte_flow.**
+
+ * Added conntrack action and item for stateful offloading.
+
* **Updated testpmd.**
* Added a command line option to configure forced speed for Ethernet port.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 1fa6e2000e..4c029776aa 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3791,6 +3791,8 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``conntrack``: match conntrack state.
+
Actions list
^^^^^^^^^^^^
@@ -4925,6 +4927,39 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample conntrack rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Conntrack rules can be set by the following commands.
+
+First, construct the connection context with the provided information.
+In the first table, create a flow rule that uses the conntrack action and
+jumps to the next table. In the next table, create a rule to check the state.
+
+::
+
+ testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
+ last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
+ last_seq 2632987379 last_ack 2532480967 last_end 2632987379
+ last_index 0x8
+ testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2632987379 reply_end 2633016339 max_win 28960
+ max_ack 2632987379
+ testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2532480967 reply_end 2532546247 max_win 65280
+ max_ack 2532480967
+ testpmd> flow indirect_action 0 create ingress action conntrack / end
+ testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
+ testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
+
+Construct the conntrack context again with only "is_orig" set to 0 (other
+fields are ignored), then use the "update" interface to update the direction.
+Create flow rules as above for the peer port.
+
+::
+
+ testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
+
BPF Functions
--------------
--
2.19.0.windows.1
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-16 12:41 ` Ori Kam
@ 2021-04-16 18:05 ` Bing Zhao
2021-04-16 21:47 ` Ajit Khaparde
0 siblings, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 18:05 UTC (permalink / raw)
To: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Ori,
My comments are inline, PSB.
> -----Original Message-----
> From: Ori Kam <orika@nvidia.com>
> Sent: Friday, April 16, 2021 8:42 PM
> To: Bing Zhao <bingz@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; ajit.khaparde@broadcom.com
> Subject: RE: [PATCH v2 1/2] ethdev: introduce conntrack flow action
> and item
>
> Hi Bing,
>
> One more thought, PSB
>
> Best,
> Ori
> > -----Original Message-----
> > From: Bing Zhao <bingz@nvidia.com>
> > Sent: Thursday, April 15, 2021 7:41 PM
> > To: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; ferruh.yigit@intel.com;
> > andrew.rybchenko@oktetlabs.ru
> > Cc: dev@dpdk.org; ajit.khaparde@broadcom.com
> > Subject: [PATCH v2 1/2] ethdev: introduce conntrack flow action
> and
> > item
> >
> > This commit introduced the conntrack action and item.
> >
> > Usually the HW offloading is stateless. For some stateful
> offloading
> > like a TCP connection, HW module will help provide the ability of
> a
> > full offloading w/o SW participation after the connection was
> > established.
> >
> > The basic usage is that in the first flow the application should
> add
> > the conntrack action and in the following flow(s) the application
> > should use the conntrack item to match on the result.
> >
> > A TCP connection has two directions traffic. To set a conntrack
> action
> > context correctly, information from packets of both directions are
> > required.
> >
> > The conntrack action should be created on one port and supply the
> peer
> > port as a parameter to the action. After context creating, it
> could
> > only be used between the ports (dual-port mode) or a single port.
> The
> > application should modify the action via the API
> > "action_handle_update" only when before using it to create a flow
> with
> > opposite direction. This will help the driver to recognize the
> > direction of the flow to be created, especially in single port
> mode.
> > The traffic from both directions will go through the same port if
> > the application works as a "forwarding engine" but not an end point.
> > There is no need to call the update interface if the subsequent
> > flows have nothing to be changed.
> >
> > Query will be supported via action_ctx_query interface, about the
> > current packets information and connection status. The fields'
> > query capabilities depend on the HW.
> >
> > For the packets received during the conntrack setup, it is
> suggested
> > to re-inject the packets in order to take full advantage of the
> > conntrack. Only the valid packets should pass the conntrack,
> packets
> > with invalid TCP information, like out of window, or with invalid
> > header, like malformed, should not pass.
> >
> > Naming and definition:
> >
> > https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> >
> > https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
> >
> > Other reference:
> >
> > https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > lib/librte_ethdev/rte_flow.c | 2 +
> > lib/librte_ethdev/rte_flow.h | 195
> > +++++++++++++++++++++++++++++++++++
> > 2 files changed, 197 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.c
> > b/lib/librte_ethdev/rte_flow.c index 27a161559d..0af601d508 100644
> > --- a/lib/librte_ethdev/rte_flow.c
> > +++ b/lib/librte_ethdev/rte_flow.c
> > @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data
> > rte_flow_desc_item[] = {
> > MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
> > MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
> > MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct
> rte_flow_item_geneve_opt)),
> > + MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
> > };
> >
> > /** Generate flow_action[] entry. */
> > @@ -186,6 +187,7 @@ static const struct rte_flow_desc_data
> > rte_flow_desc_action[] = {
> > * indirect action handle.
> > */
> > MK_FLOW_ACTION(INDIRECT, 0),
> > + MK_FLOW_ACTION(CONNTRACK, sizeof(struct
> > rte_flow_action_conntrack)),
> > };
> >
> > int
> > diff --git a/lib/librte_ethdev/rte_flow.h
> > b/lib/librte_ethdev/rte_flow.h index 91ae25b1da..024d1a2026 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches conntrack state.
> > + *
> > + * See struct rte_flow_item_conntrack.
> > + */
> > + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = { };
> #endif
> >
> > +/**
> > + * The packet is with valid state after conntrack checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
> > +/**
> > + * The state of the connection was changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
> > +/**
> > + * Error is detected on this packet for this connection and
> > + * an invalid state is set.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
> > +/**
> > + * The packet contains some bad field(s) and cannot continue
> > + * with the conntrack module checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> > + *
> > + * Matches the state of a packet after it passed the connection
> > +tracking
> > + * examination. The state is a bit mask of one
> > +RTE_FLOW_CONNTRACK_FLAG*
> > + * or a reasonable combination of these bits.
> > + */
> > +struct rte_flow_item_conntrack {
> > + uint32_t flags;
> > +};
> > +
> > +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */ #ifndef
> > +__cplusplus static const struct rte_flow_item_conntrack
> > +rte_flow_item_conntrack_mask =
> > {
> > + .flags = 0xffffffff,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > @@ -2277,6 +2331,17 @@ enum rte_flow_action_type {
> > * same port or across different ports.
> > */
> > RTE_FLOW_ACTION_TYPE_INDIRECT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Enable tracking a TCP connection state.
> > + *
> > + * Send packet to HW connection tracking module for
> examination.
> > + *
> > + * See struct rte_flow_action_conntrack.
> > + */
> > + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -2875,6 +2940,136 @@ struct rte_flow_action_set_dscp {
> > */
> > struct rte_flow_action_handle;
> >
> > +/**
> > + * The state of a TCP connection.
> > + */
> > +enum rte_flow_conntrack_state {
> > + /**< SYN-ACK packet was seen. */
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< 3-way handshake was done. */
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< First FIN packet was received to close the connection. */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< Second FIN was received, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< Second FIN was ACKed, connection was closed. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > +};
> > +
> > +/**
> > + * The last passed TCP packet flags of a connection.
> > + */
> > +enum rte_flow_conntrack_tcp_last_index {
> > + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK
> > flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag.
> */ };
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * Configuration parameters for each direction of a TCP
> connection.
> > + */
> > +struct rte_flow_tcp_dir_param {
> > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> disable. */
> > + uint32_t close_initiated:1; /**< The FIN was sent by this
> direction. */
> > + /**< An ACK packet has been received by this side. */
> > + uint32_t last_ack_seen:1;
> > + /**< If set, indicates that there is unacked data of the
> connection. */
> > + uint32_t data_unacked:1;
> > + /**< Maximal value of sequence + payload length over sent
> > + * packets (next ACK from the opposite direction).
> > + */
> > + uint32_t sent_end;
> > + /**< Maximal value of (ACK + window size) over received packet
> +
> > length
> > + * over sent packet (maximal sequence could be sent).
> > + */
> > + uint32_t reply_end;
>
> This comment is for all members that are part of the packet, Do you
> think it should be in network order?
Almost none of the fields are taken directly from the packet; most of them are calculated from the packets' information. So I prefer to keep host byte order, which is easier to use, and
keep all the fields of the whole structure in the same endianness format.
What do you think?
> I can see the advantage in both ways nice I assume the app needs
> this data in host byte-order but since in most other cases we use
> network byte-order to set values that are coming from the packet
> itself maybe it is better to use network byte-order (will also save
> the conversion)
Only the seq/ack/window in the common part are part of the packets, others are not.
BTW, should we support liberal mode separately for both directions, as some "half-duplex"? One direction could work normally while the opposite direction works in the liberal mode.
>
> > + /**< Maximal value of actual window size over sent packets. */
> > + uint32_t max_win;
> > + /**< Maximal value of ACK over sent packets. */
> > + uint32_t max_ack;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Configuration and initial state for the connection tracking
> module.
> > + * This structure could be used for both setting and query.
> > + */
> > +struct rte_flow_action_conntrack {
> > + uint16_t peer_port; /**< The peer port number, can be the same
> port.
> > */
> > + /**< Direction of this connection when creating a flow, the
> value only
> > + * affects the subsequent flows creation.
> > + */
> > + uint32_t is_original_dir:1;
> > + /**< Enable / disable the conntrack HW module. When disabled,
> the
> > + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> > + * In this state the HW will act as passthrough.
> > + * It only affects this conntrack object in the HW without any
> effect
> > + * to the other objects.
> > + */
> > + uint32_t enable:1;
> > + /**< At least one ack was seen, after the connection was
> established.
> > */
> > + uint32_t live_connection:1;
> > + /**< Enable selective ACK on this connection. */
> > + uint32_t selective_ack:1;
> > + /**< A challenge ack has passed. */
> > + uint32_t challenge_ack_passed:1;
> > + /**< 1: The last packet is seen that comes from the original
> direction.
> > + * 0: From the reply direction.
> > + */
> > + uint32_t last_direction:1;
> > + /**< No TCP check will be done except the state change. */
> > + uint32_t liberal_mode:1;
> > + /**< The current state of the connection. */
> > + enum rte_flow_conntrack_state state;
> > + /**< Scaling factor for maximal allowed ACK window. */
> > + uint8_t max_ack_window;
> > + /**< Maximal allowed number of retransmission times. */
> > + uint8_t retransmission_limit;
> > + /**< TCP parameters of the original direction. */
> > + struct rte_flow_tcp_dir_param original_dir;
> > + /**< TCP parameters of the reply direction. */
> > + struct rte_flow_tcp_dir_param reply_dir;
> > + /**< The window value of the last packet passed this conntrack.
> */
> > + uint16_t last_window;
> > + enum rte_flow_conntrack_tcp_last_index last_index;
> > + /**< The sequence of the last packet passed this conntrack. */
> > + uint32_t last_seq;
> > + /**< The acknowledgement of the last packet passed this
> conntrack. */
> > + uint32_t last_ack;
> > + /**< The total value ACK + payload length of the last packet
> passed
> > + * this conntrack.
> > + */
> > + uint32_t last_end;
> > +};
> > +
> > +/**
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Wrapper structure for the context update interface.
> > + * Ports cannot support updating, and the only valid solution is
> to
> > + * destroy the old context and create a new one instead.
> > + */
> > +struct rte_flow_modify_conntrack {
> > + /**< New connection tracking parameters to be updated. */
> > + struct rte_flow_action_conntrack new_ct;
> > + uint32_t direction:1; /**< The direction field will be updated.
> */
> > + /**< All the other fields except direction will be updated. */
> > + uint32_t state:1;
> > + uint32_t reserved:30; /**< Reserved bits for the future usage.
> */ };
> > +
> > /**
> > * Field IDs for MODIFY_FIELD action.
> > */
> > --
> > 2.19.0.windows.1
BR. Bing
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-16 10:49 ` Thomas Monjalon
@ 2021-04-16 18:18 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 18:18 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon
Cc: Ori Kam, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde,
jerinj, humin29, rosen.xu, hemant.agrawal
Hi Thomas,
Thanks for your comments. Almost all the comments are addressed.
PSB.
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Friday, April 16, 2021 6:50 PM
> To: Bing Zhao <bingz@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru; dev@dpdk.org;
> ajit.khaparde@broadcom.com; jerinj@marvell.com; humin29@huawei.com;
> rosen.xu@intel.com; hemant.agrawal@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack
> flow action and item
>
> External email: Use caution opening links or attachments
>
>
> 15/04/2021 18:41, Bing Zhao:
> > This commit introduced the conntrack action and item.
> >
> > Usually the HW offloading is stateless. For some stateful
> offloading
> > like a TCP connection, HW module will help provide the ability of
> a
> > full offloading w/o SW participation after the connection was
> > established.
> >
> > The basic usage is that in the first flow the application should
> add
> > the conntrack action and in the following flow(s) the application
> > should use the conntrack item to match on the result.
>
> You probably mean "flow rule", not "traffic flow".
> Please make it clear to avoid confusion.
Done
>
> > A TCP connection has two directions traffic. To set a conntrack
> action
> > context correctly, information from packets of both directions are
> > required.
> >
> > The conntrack action should be created on one port and supply the
> peer
> > port as a parameter to the action. After context creating, it
> could
> > only be used between the ports (dual-port mode) or a single port.
> The
> > application should modify the action via the API
> > "action_handle_update" only when before using it to create a flow
> with
> > opposite direction. This will help the driver to recognize the
> > direction of the flow to be created, especially in single port
> mode.
> > The traffic from both directions will go through the same port if
> the
> > application works as an "forwarding engine" but not a end point.
> > There is no need to call the update interface if the subsequent
> flows
> > have nothing to be changed.
>
> I am not sure this is a feature description for the commit log or an
> usage explanation for the doc.
> In any case, please distinguish "ethdev port" and "TCP port"
> to avoid confusion.
Changed, thanks.
>
> > Query will be supported via action_ctx_query interface, about the
> > current packets information and connection status. Tha fields
> query
> > capabilities depends on the HW.
> >
> > For the packets received during the conntrack setup, it is
> suggested
> > to re-inject the packets in order to take full advantage of the
>
> What do you mean by "full advantage"?
> It is counter-intuitive to re-inject for offloading.
> Does it improve the performance?
No, it is not for the performance but for the functionality correctness. Before the CT is established, some data+ack packets may already have been received by the SW, and the application will use the initial information to set up a conntrack. This may result in erroneous checking of the following packets. By re-injecting the packets already received by the SW before the CT is established, the HW will have all the packets' information and can check the following packets correctly.
>
> > conntrack. Only the valid packets should pass the conntrack,
> packets
> > with invalid TCP information, like out of window, or with invalid
> > header, like malformed, should not pass.
> >
> > Naming and definition:
>
> You mean naming is inspired from Linux?
The naming and the critical fields' definitions. The original idea is from the paper listed below (correct me if I am wrong), and there are some well-known definitions in this area; to my understanding, it would be better to follow them.
>
> >
> > https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> >
> > https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
> >
> > Other reference:
> >
> > https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> [...]
> > + /**
> > + * [META]
> > + *
> > + * Matches conntrack state.
> > + *
> > + * See struct rte_flow_item_conntrack.
>
> Please use @see for hyperlink in doxygen.
>
> > + */
> > + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> > };
> [...]
> > +/**
> > + * The packet is with valid state after conntrack checking.
>
> "is with valid state" looks strange.
> I propose "The packet is valid after conntrack checking."
Done
>
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_VALID (1 << 0)
>
> Please use RTE_BIT32().
Done
>
> > +/**
> > + * The state of the connection was changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_CHANGED (1 << 1)
> > +/**
> > + * Error is detected on this packet for this connection and
> > + * an invalid state is set.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVAL (1 << 2)
>
> "INVAL" is strange. Can we add the missing 2 characters?
> RTE_FLOW_CONNTRACK_FLAG_PKT_STATE_INVALID
>
> On a related note, do we really need the word FLAG?
> And it is conflicting with the prefix in enum
> rte_flow_conntrack_tcp_last_index I think
> RTE_FLOW_CONNTRACK_PKT_STATE_ is a good prefix, long enough.
>
Done
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_HW_DISABLED (1 << 3)
>
> This one does not have PKT in its name.
> And it is limiting to HW, while the driver could implement conntrack
> in SW.
> I propose RTE_FLOW_CONNTRACK_PKT_DISABLED
>
Done
> > +/**
> > + * The packet contains some bad field(s) and cannot continue
> > + * with the conntrack module checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_FLAG_PKT_BAD (1 << 4)
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> > + *
> > + * Matches the state of a packet after it passed the connection
> > +tracking
> > + * examination. The state is a bit mask of one
> > +RTE_FLOW_CONNTRACK_FLAG*
>
> s/bit mask/bitmap/ ?
Done
>
> RTE_FLOW_CONNTRACK_PKT_STATE_*
> otherwise it is messed with rte_flow_conntrack_tcp_last_index
>
> > + * or a reasonable combination of these bits.
> > + */
> > +struct rte_flow_item_conntrack {
> > + uint32_t flags;
> > +};
> [...]
> > +
> > + /**
> > + * [META]
> > + *
> > + * Enable tracking a TCP connection state.
> > + *
> > + * Send packet to HW connection tracking module for
> examination.
>
> Not necessarily HW.
> No packet is sent.
> I think you can remove this sentence completely.
>
Done
> > + *
> > + * See struct rte_flow_action_conntrack.
>
> @see
>
> > + */
> > + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> > };
> >
> > /**
> > @@ -2875,6 +2940,136 @@ struct rte_flow_action_set_dscp {
> > */
> > struct rte_flow_action_handle;
> >
> > +/**
> > + * The state of a TCP connection.
> > + */
> > +enum rte_flow_conntrack_state {
> > + /**< SYN-ACK packet was seen. */
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< 3-way handshark was done. */
>
> s/handshark/handshake/
>
Done
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< First FIN packet was received to close the connection.
> */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< Second FIN was received, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< Second FIN was ACKed, connection was closed. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > +};
> > +
> > +/**
> > + * The last passed TCP packet flags of a connection.
> > + */
> > +enum rte_flow_conntrack_tcp_last_index {
> > + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_SYN = (1 << 0), /**< With SYN flag.
> */
> > + RTE_FLOW_CONNTRACK_FLAG_SYNACK = (1 << 1), /**< With SYN+ACK
> flag. */
> > + RTE_FLOW_CONNTRACK_FLAG_FIN = (1 << 2), /**< With FIN flag.
> */
> > + RTE_FLOW_CONNTRACK_FLAG_ACK = (1 << 3), /**< With ACK flag.
> */
> > + RTE_FLOW_CONNTRACK_FLAG_RST = (1 << 4), /**< With RST flag.
> */
> > +};
>
> Please use RTE_BIT32().
>
Done
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * Configuration parameters for each direction of a TCP
> connection.
> > + */
> > +struct rte_flow_tcp_dir_param {
> > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> disable. */
> > + uint32_t close_initiated:1; /**< The FIN was sent by this
> direction. */
> > + /**< An ACK packet has been received by this side. */
>
> Move all comments on their own line before the struct member.
> Comment should then start with /**
>
All done. BTW, I see that in the current code the "/**<" format is used in a lot of places.
> > + uint32_t last_ack_seen:1;
> > + /**< If set, indicates that there is unacked data of the
> > + connection. */
>
> not sure what means "unacked data of the connection"
Updated the description; it means some packets were sent but not all of them were ACKed.
>
> > + uint32_t data_unacked:1;
> > + /**< Maximal value of sequence + payload length over sent
> > + * packets (next ACK from the opposite direction).
> > + */
> > + uint32_t sent_end;
> > + /**< Maximal value of (ACK + window size) over received
> packet + length
> > + * over sent packet (maximal sequence could be sent).
> > + */
> > + uint32_t reply_end;
> > + /**< Maximal value of actual window size over sent packets.
> */
> > + uint32_t max_win;
> > + /**< Maximal value of ACK over sent packets. */
> > + uint32_t max_ack;
>
> Not sure about the word "over" in above definitions.
Changed to "in"
>
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior
> notice
> > + *
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Configuration and initial state for the connection tracking
> module.
> > + * This structure could be used for both setting and query.
> > + */
> > +struct rte_flow_action_conntrack {
> > + uint16_t peer_port; /**< The peer port number, can be the
> same port. */
> > + /**< Direction of this connection when creating a flow, the
> value only
> > + * affects the subsequent flows creation.
> > + */
>
> As for rte_flow_tcp_dir_param, better to move comments before, on
> their own line.
>
> > + uint32_t is_original_dir:1;
> > + /**< Enable / disable the conntrack HW module. When disabled,
> the
> > + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> > + * In this state the HW will act as passthrough.
> > + * It only affects this conntrack object in the HW without
> any effect
> > + * to the other objects.
> > + */
> > + uint32_t enable:1;
> > + /**< At least one ack was seen, after the connection was
> established. */
> > + uint32_t live_connection:1;
> > + /**< Enable selective ACK on this connection. */
> > + uint32_t selective_ack:1;
> > + /**< A challenge ack has passed. */
> > + uint32_t challenge_ack_passed:1;
> > + /**< 1: The last packet is seen that comes from the original
> direction.
> > + * 0: From the reply direction.
> > + */
> > + uint32_t last_direction:1;
> > + /**< No TCP check will be done except the state change. */
> > + uint32_t liberal_mode:1;
> > + /**< The current state of the connection. */
> > + enum rte_flow_conntrack_state state;
> > + /**< Scaling factor for maximal allowed ACK window. */
> > + uint8_t max_ack_window;
> > + /**< Maximal allowed number of retransmission times. */
> > + uint8_t retransmission_limit;
> > + /**< TCP parameters of the original direction. */
> > + struct rte_flow_tcp_dir_param original_dir;
> > + /**< TCP parameters of the reply direction. */
> > + struct rte_flow_tcp_dir_param reply_dir;
> > + /**< The window value of the last packet passed this
> conntrack. */
> > + uint16_t last_window;
> > + enum rte_flow_conntrack_tcp_last_index last_index;
> > + /**< The sequence of the last packet passed this conntrack.
> */
> > + uint32_t last_seq;
> > + /**< The acknowledgement of the last packet passed this
> conntrack. */
> > + uint32_t last_ack;
> > + /**< The total value ACK + payload length of the last packet
> passed
> > + * this conntrack.
> > + */
> > + uint32_t last_end;
> > +};
> > +
> > +/**
> > + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> > + *
> > + * Wrapper structure for the context update interface.
> > + * Ports cannot support updating, and the only valid solution is
> to
> > + * destroy the old context and create a new one instead.
> > + */
> > +struct rte_flow_modify_conntrack {
> > + /**< New connection tracking parameters to be updated. */
> > + struct rte_flow_action_conntrack new_ct;
> > + uint32_t direction:1; /**< The direction field will be
> updated. */
> > + /**< All the other fields except direction will be updated.
> */
> > + uint32_t state:1;
> > + uint32_t reserved:30; /**< Reserved bits for future usage. */
> > +};
>
>
Thanks
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add CLI for conntrack
2021-04-16 8:46 ` Ori Kam
@ 2021-04-16 18:20 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-16 18:20 UTC (permalink / raw)
To: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde
Hi Ori,
> -----Original Message-----
> From: Ori Kam <orika@nvidia.com>
> Sent: Friday, April 16, 2021 4:47 PM
> To: Bing Zhao <bingz@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru
> Cc: dev@dpdk.org; ajit.khaparde@broadcom.com
> Subject: RE: [PATCH v2 2/2] app/testpmd: add CLI for conntrack
>
> Hi Bing,
> 1. You are missing the documentation patch:
> doc/guides/testpmd_app_ug/testpmd_funcs.rst
> Please make sure that you add examples at the end of the file.
> You can see an example in the integrity patch.
A simple example is added to the doc in the new patch set.
>
> > -----Original Message-----
> > From: Bing Zhao <bingz@nvidia.com>
> > Sent: Thursday, April 15, 2021 7:41 PM
> > Subject: [PATCH v2 2/2] app/testpmd: add CLI for conntrack
> >
> > The command line for testing connection tracking is added. To create
> > a conntrack object, 3 parts are needed.
> > set conntrack com peer ...
> > set conntrack orig scale ...
> > set conntrack rply scale ...
> > This will create a full conntrack action structure for the indirect
> > action. After the indirect action handle of "conntrack" is created,
> > it can be used in flow creation. Before updating, the same structure
> > is also needed, together with the update command "conntrack_update",
> > to update the "dir" or "ctx".
> >
> > After the flow with the conntrack action is created, the packet
> > should jump to the next flow for result checking with the conntrack
> > item. The state is defined with bits, and a valid combination can be
> > supported.
> >
> Can you please add more detailed examples?
> Also, what are the commands to update and use the connection tracking
> action and item?
>
Not sure whether all the details should be listed here; maybe the doc is enough?
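For reference, a minimal end-to-end sequence (the command names follow this patch; the port indices, group numbers and TCP field values are invented for illustration) could look like:

```
testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8
testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end 2632987379 reply_end 2633016339 max_win 28960 max_ack 2632987379
testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2532480967 reply_end 2532546247 max_win 65280 max_ack 2532480967
testpmd> flow indirect_action 0 create ingress action conntrack / end
testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
```

The first three lines fill the global conntrack context, the indirect action captures it, and the last line shows a direction-only update before reusing the handle; the exact syntax may differ in the final doc patch.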
>
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > app/test-pmd/cmdline.c | 354
> ++++++++++++++++++++++++++++++++++++
> > app/test-pmd/cmdline_flow.c | 92 ++++++++++
> > app/test-pmd/config.c | 65 ++++++-
> > app/test-pmd/testpmd.h | 2 +
> > 4 files changed, 512 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > c28a3d2e5d..58ab7191d6 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -13618,6 +13618,358 @@ cmdline_parse_inst_t
> > cmd_set_mplsoudp_decap_with_vlan = {
> > },
> > };
> >
> > +/** Set connection tracking object common details */ struct
> > +cmd_set_conntrack_common_result {
> > + cmdline_fixed_string_t set;
> > + cmdline_fixed_string_t conntrack;
> > + cmdline_fixed_string_t common;
> > + cmdline_fixed_string_t peer;
> > + cmdline_fixed_string_t is_orig;
> > + cmdline_fixed_string_t enable;
> > + cmdline_fixed_string_t live;
> > + cmdline_fixed_string_t sack;
> > + cmdline_fixed_string_t cack;
> > + cmdline_fixed_string_t last_dir;
> > + cmdline_fixed_string_t liberal;
> > + cmdline_fixed_string_t state;
> > + cmdline_fixed_string_t max_ack_win;
> > + cmdline_fixed_string_t retrans;
> > + cmdline_fixed_string_t last_win;
> > + cmdline_fixed_string_t last_seq;
> > + cmdline_fixed_string_t last_ack;
> > + cmdline_fixed_string_t last_end;
> > + cmdline_fixed_string_t last_index;
> > + uint8_t stat;
> > + uint8_t factor;
> > + uint16_t peer_port;
> > + uint32_t is_original;
> > + uint32_t en;
> > + uint32_t is_live;
> > + uint32_t s_ack;
> > + uint32_t c_ack;
> > + uint32_t ld;
> > + uint32_t lb;
> > + uint8_t re_num;
> > + uint8_t li;
> > + uint16_t lw;
> > + uint32_t ls;
> > + uint32_t la;
> > + uint32_t le;
> Why not use full names?
>
> > +};
> > +
> > +cmdline_parse_token_string_t cmd_set_conntrack_set =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + set, "set");
> > +cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + conntrack, "conntrack");
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_com =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + common, "com");
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + peer, "peer");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + peer_port, RTE_UINT16);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + is_orig, "is_orig");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + is_original, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + enable, "enable");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + en, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_live =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + live, "live");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + is_live, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + sack, "sack");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + s_ack, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + cack, "cack");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + c_ack, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_dir, "last_dir");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + ld, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + liberal, "liberal");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + lb, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_state =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + state, "state");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + stat, RTE_UINT8);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin
> =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + max_ack_win, "max_ack_win");
> > +cmdline_parse_token_num_t
> > cmd_set_conntrack_common_max_ackwin_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + factor, RTE_UINT8);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + retrans, "r_lim");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + re_num, RTE_UINT8);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_win, "last_win");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + lw, RTE_UINT16);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_seq, "last_seq");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + ls, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_ack, "last_ack");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + la, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_end, "last_end");
> > +cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + le, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_common_last_index
> =
> > + TOKEN_STRING_INITIALIZER(struct
> > cmd_set_conntrack_common_result,
> > + last_index, "last_index");
> > +cmdline_parse_token_num_t
> cmd_set_conntrack_common_last_index_value
> > =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
> > + li, RTE_UINT8);
> > +
> > +static void cmd_set_conntrack_common_parsed(void *parsed_result,
> > + __rte_unused struct cmdline *cl,
> > + __rte_unused void *data)
> > +{
> > + struct cmd_set_conntrack_common_result *res = parsed_result;
> > +
> > + /* No need to swap to big endian. */
> > + conntrack_context.peer_port = res->peer_port;
> > + conntrack_context.is_original_dir = res->is_original;
> > + conntrack_context.enable = res->en;
> > + conntrack_context.live_connection = res->is_live;
> > + conntrack_context.selective_ack = res->s_ack;
> > + conntrack_context.challenge_ack_passed = res->c_ack;
> > + conntrack_context.last_direction = res->ld;
> > + conntrack_context.liberal_mode = res->lb;
> > + conntrack_context.state = (enum rte_flow_conntrack_state)res-
> >stat;
> > + conntrack_context.max_ack_window = res->factor;
> > + conntrack_context.retransmission_limit = res->re_num;
> > + conntrack_context.last_window = res->lw;
> > + conntrack_context.last_index =
> > + (enum rte_flow_conntrack_tcp_last_index)res->li;
> > + conntrack_context.last_seq = res->ls;
> > + conntrack_context.last_ack = res->la;
> > + conntrack_context.last_end = res->le; }
> > +
> > +cmdline_parse_inst_t cmd_set_conntrack_common = {
> > + .f = cmd_set_conntrack_common_parsed,
> > + .data = NULL,
> > + .help_str = "set conntrack com peer <port_id> is_orig <dir>
> enable
> > <en>"
> > + " live <ack_seen> sack <en> cack <passed> last_dir
> <dir>"
> > + " liberal <en> state <s> max_ack_win <factor> r_lim
> <num>"
> > + " last_win <win> last_seq <seq> last_ack <ack> last_end
> > <end>"
> > + " last_index <flag>",
> > + .tokens = {
> > + (void *)&cmd_set_conntrack_set,
> > + (void *)&cmd_set_conntrack_conntrack,
> > + (void *)&cmd_set_conntrack_common_peer,
> > + (void *)&cmd_set_conntrack_common_peer_value,
> > + (void *)&cmd_set_conntrack_common_is_orig,
> > + (void *)&cmd_set_conntrack_common_is_orig_value,
> > + (void *)&cmd_set_conntrack_common_enable,
> > + (void *)&cmd_set_conntrack_common_enable_value,
> > + (void *)&cmd_set_conntrack_common_live,
> > + (void *)&cmd_set_conntrack_common_live_value,
> > + (void *)&cmd_set_conntrack_common_sack,
> > + (void *)&cmd_set_conntrack_common_sack_value,
> > + (void *)&cmd_set_conntrack_common_cack,
> > + (void *)&cmd_set_conntrack_common_cack_value,
> > + (void *)&cmd_set_conntrack_common_last_dir,
> > + (void *)&cmd_set_conntrack_common_last_dir_value,
> > + (void *)&cmd_set_conntrack_common_liberal,
> > + (void *)&cmd_set_conntrack_common_liberal_value,
> > + (void *)&cmd_set_conntrack_common_state,
> > + (void *)&cmd_set_conntrack_common_state_value,
> > + (void *)&cmd_set_conntrack_common_max_ackwin,
> > + (void *)&cmd_set_conntrack_common_max_ackwin_value,
> > + (void *)&cmd_set_conntrack_common_retrans,
> > + (void *)&cmd_set_conntrack_common_retrans_value,
> > + (void *)&cmd_set_conntrack_common_last_win,
> > + (void *)&cmd_set_conntrack_common_last_win_value,
> > + (void *)&cmd_set_conntrack_common_last_seq,
> > + (void *)&cmd_set_conntrack_common_last_seq_value,
> > + (void *)&cmd_set_conntrack_common_last_ack,
> > + (void *)&cmd_set_conntrack_common_last_ack_value,
> > + (void *)&cmd_set_conntrack_common_last_end,
> > + (void *)&cmd_set_conntrack_common_last_end_value,
> > + (void *)&cmd_set_conntrack_common_last_index,
> > + (void *)&cmd_set_conntrack_common_last_index_value,
> > + NULL,
> > + },
> > +};
> > +
> > +/** Set connection tracking object both directions' details */
> struct
> > +cmd_set_conntrack_dir_result {
> > + cmdline_fixed_string_t set;
> > + cmdline_fixed_string_t conntrack;
> > + cmdline_fixed_string_t dir;
> > + cmdline_fixed_string_t scale;
> > + cmdline_fixed_string_t fin;
> > + cmdline_fixed_string_t ack_seen;
> > + cmdline_fixed_string_t unack;
> > + cmdline_fixed_string_t sent_end;
> > + cmdline_fixed_string_t reply_end;
> > + cmdline_fixed_string_t max_win;
> > + cmdline_fixed_string_t max_ack;
> > + uint32_t factor;
> > + uint32_t f;
> > + uint32_t as;
> > + uint32_t un;
> > + uint32_t se;
> > + uint32_t re;
> > + uint32_t mw;
> > + uint32_t ma;
> > +};
> > +
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + set, "set");
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + conntrack, "conntrack");
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + dir, "orig#rply");
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + scale, "scale");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + factor, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + fin, "fin");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + f, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + ack_seen, "acked");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + as, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + unack, "unack_data");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value
> =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + un, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + sent_end, "sent_end");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + se, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + reply_end, "reply_end");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + re, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + max_win, "max_win");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + mw, RTE_UINT32);
> > +cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
> > + TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + max_ack, "max_ack");
> > +cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
> > + TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
> > + ma, RTE_UINT32);
> > +
> > +static void cmd_set_conntrack_dir_parsed(void *parsed_result,
> > + __rte_unused struct cmdline *cl,
> > + __rte_unused void *data)
> > +{
> > + struct cmd_set_conntrack_dir_result *res = parsed_result;
> > + struct rte_flow_tcp_dir_param *dir = NULL;
> > +
> > + if (strcmp(res->dir, "orig") == 0)
> > + dir = &conntrack_context.original_dir;
> > + else if (strcmp(res->dir, "rply") == 0)
> > + dir = &conntrack_context.reply_dir;
> > + else
> > + return;
> > + dir->scale = res->factor;
> > + dir->close_initiated = res->f;
> > + dir->last_ack_seen = res->as;
> > + dir->data_unacked = res->un;
> > + dir->sent_end = res->se;
> > + dir->reply_end = res->re;
> > + dir->max_ack = res->ma;
> > + dir->max_win = res->mw;
> > +}
> > +
> > +cmdline_parse_inst_t cmd_set_conntrack_dir = {
> > + .f = cmd_set_conntrack_dir_parsed,
> > + .data = NULL,
> > + .help_str = "set conntrack orig|rply scale <factor> fin
> <sent>"
> > + " acked <seen> unack_data <unack> sent_end <sent>"
> > + " reply_end <reply> max_win <win> max_ack <ack>",
> > + .tokens = {
> > + (void *)&cmd_set_conntrack_set,
> > + (void *)&cmd_set_conntrack_conntrack,
> > + (void *)&cmd_set_conntrack_dir_dir,
> > + (void *)&cmd_set_conntrack_dir_scale,
> > + (void *)&cmd_set_conntrack_dir_scale_value,
> > + (void *)&cmd_set_conntrack_dir_fin,
> > + (void *)&cmd_set_conntrack_dir_fin_value,
> > + (void *)&cmd_set_conntrack_dir_ack,
> > + (void *)&cmd_set_conntrack_dir_ack_value,
> > + (void *)&cmd_set_conntrack_dir_unack_data,
> > + (void *)&cmd_set_conntrack_dir_unack_data_value,
> > + (void *)&cmd_set_conntrack_dir_sent_end,
> > + (void *)&cmd_set_conntrack_dir_sent_end_value,
> > + (void *)&cmd_set_conntrack_dir_reply_end,
> > + (void *)&cmd_set_conntrack_dir_reply_end_value,
> > + (void *)&cmd_set_conntrack_dir_max_win,
> > + (void *)&cmd_set_conntrack_dir_max_win_value,
> > + (void *)&cmd_set_conntrack_dir_max_ack,
> > + (void *)&cmd_set_conntrack_dir_max_ack_value,
> > + NULL,
> > + },
> > +};
> > +
> > /* Strict link priority scheduling mode setting */ static void
> > cmd_strict_link_prio_parsed( @@ -17117,6 +17469,8 @@
> > cmdline_parse_ctx_t main_ctx[] = {
> > (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
> > (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
> > (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
> > + (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
> > + (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
> > (cmdline_parse_inst_t *)&cmd_ddp_add,
> > (cmdline_parse_inst_t *)&cmd_ddp_del,
> > (cmdline_parse_inst_t *)&cmd_ddp_get_list, diff --git
> > a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index
> > d83dec942a..fc5e31be5e 100644
> > --- a/app/test-pmd/cmdline_flow.c
> > +++ b/app/test-pmd/cmdline_flow.c
> > @@ -289,6 +289,7 @@ enum index {
> > ITEM_GENEVE_OPT_TYPE,
> > ITEM_GENEVE_OPT_LENGTH,
> > ITEM_GENEVE_OPT_DATA,
> > + ITEM_CONNTRACK,
> >
> > /* Validate/create actions. */
> > ACTIONS,
> > @@ -427,6 +428,10 @@ enum index {
> > ACTION_MODIFY_FIELD_SRC_OFFSET,
> > ACTION_MODIFY_FIELD_SRC_VALUE,
> > ACTION_MODIFY_FIELD_WIDTH,
> > + ACTION_CONNTRACK,
> > + ACTION_CONNTRACK_UPDATE,
> > + ACTION_CONNTRACK_UPDATE_DIR,
> > + ACTION_CONNTRACK_UPDATE_CTX,
> > };
> >
> > /** Maximum size for pattern in struct rte_flow_item_raw. */ @@
> > -565,6 +570,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
> >
> > struct mplsoudp_decap_conf mplsoudp_decap_conf;
> >
> > +struct rte_flow_action_conntrack conntrack_context;
> > +
> > #define ACTION_SAMPLE_ACTIONS_NUM 10
> > #define RAW_SAMPLE_CONFS_MAX_NUM 8
> > /** Storage for struct rte_flow_action_sample including external
> > data. */ @@ -956,6 +963,7 @@ static const enum index next_item[] =
> {
> > ITEM_PFCP,
> > ITEM_ECPRI,
> > ITEM_GENEVE_OPT,
> > + ITEM_CONNTRACK,
> > END_SET,
> > ZERO,
> > };
> > @@ -1370,6 +1378,8 @@ static const enum index next_action[] = {
> > ACTION_SAMPLE,
> > ACTION_INDIRECT,
> > ACTION_MODIFY_FIELD,
> > + ACTION_CONNTRACK,
> > + ACTION_CONNTRACK_UPDATE,
> > ZERO,
> > };
> >
> > @@ -1638,6 +1648,13 @@ static const enum index
> > action_modify_field_src[] = {
> > ZERO,
> > };
> >
> > +static const enum index action_update_conntrack[] = {
> > + ACTION_CONNTRACK_UPDATE_DIR,
> > + ACTION_CONNTRACK_UPDATE_CTX,
> > + ACTION_NEXT,
> > + ZERO,
> > +};
> > +
> > static int parse_set_raw_encap_decap(struct context *, const
> struct token *,
> > const char *, unsigned int,
> > void *, unsigned int);
> > @@ -1728,6 +1745,10 @@ static int
> > parse_vc_modify_field_id(struct context *ctx, const struct token
> *token,
> > const char *str, unsigned int len, void *buf,
> > unsigned int size);
> > +static int
> > +parse_vc_action_conntrack_update(struct context *ctx, const
> struct
> > +token
> > *token,
> > + const char *str, unsigned int len, void *buf,
> > + unsigned int size);
> > static int parse_destroy(struct context *, const struct token *,
> > const char *, unsigned int,
> > void *, unsigned int);
> > @@ -3373,6 +3394,13 @@ static const struct token token_list[] = {
> > (sizeof(struct rte_flow_item_geneve_opt),
> > ITEM_GENEVE_OPT_DATA_SIZE)),
> > },
> > + [ITEM_CONNTRACK] = {
> > + .name = "conntrack",
> > + .help = "conntrack state",
> > + .next = NEXT(NEXT_ENTRY(ITEM_NEXT),
> > NEXT_ENTRY(UNSIGNED),
> > + item_param),
> > + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack,
> > flags)),
> > + },
> > /* Validate/create actions. */
> > [ACTIONS] = {
> > .name = "actions",
> > @@ -4471,6 +4499,34 @@ static const struct token token_list[] = {
> > .call = parse_vc_action_sample_index,
> > .comp = comp_set_sample_index,
> > },
> > + [ACTION_CONNTRACK] = {
> > + .name = "conntrack",
> > + .help = "create a conntrack object",
> > + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> > + .priv = PRIV_ACTION(CONNTRACK,
> > + sizeof(struct
> rte_flow_action_conntrack)),
> > + .call = parse_vc,
> > + },
> > + [ACTION_CONNTRACK_UPDATE] = {
> > + .name = "conntrack_update",
> > + .help = "update a conntrack object",
> > + .next = NEXT(action_update_conntrack),
> > + .priv = PRIV_ACTION(CONNTRACK,
> > + sizeof(struct
> rte_flow_modify_conntrack)),
> > + .call = parse_vc,
> > + },
> > + [ACTION_CONNTRACK_UPDATE_DIR] = {
> > + .name = "dir",
> > + .help = "update a conntrack object direction",
> > + .next = NEXT(action_update_conntrack),
> > + .call = parse_vc_action_conntrack_update,
> > + },
> > + [ACTION_CONNTRACK_UPDATE_CTX] = {
> > + .name = "ctx",
> > + .help = "update a conntrack object context",
> > + .next = NEXT(action_update_conntrack),
> > + .call = parse_vc_action_conntrack_update,
> > + },
> > /* Indirect action destroy arguments. */
> > [INDIRECT_ACTION_DESTROY_ID] = {
> > .name = "action_id",
> > @@ -6277,6 +6333,42 @@ parse_vc_modify_field_id(struct context
> *ctx,
> > const struct token *token,
> > return len;
> > }
> >
> > +/** Parse the conntrack update, not a rte_flow_action. */ static
> int
> > +parse_vc_action_conntrack_update(struct context *ctx, const
> struct
> > +token
> > *token,
> > + const char *str, unsigned int len, void *buf,
> > + unsigned int size)
> > +{
> > + struct buffer *out = buf;
> > + struct rte_flow_modify_conntrack *ct_modify = NULL;
> > +
> > + (void)size;
> > + if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
> > + ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
> > + return -1;
> > + /* Token name must match. */
> > + if (parse_default(ctx, token, str, len, NULL, 0) < 0)
> > + return -1;
> > +	/* Nothing else to do if there is no buffer. */
> > +	if (!out)
> > +		return len;
> > +	ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
> > + if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
> > + ct_modify->new_ct.is_original_dir =
> > + conntrack_context.is_original_dir;
> > + ct_modify->direction = 1;
> > + } else {
> > + uint32_t old_dir;
> > +
> > + old_dir = ct_modify->new_ct.is_original_dir;
> > + memcpy(&ct_modify->new_ct, &conntrack_context,
> > + sizeof(conntrack_context));
> > + ct_modify->new_ct.is_original_dir = old_dir;
> > + ct_modify->state = 1;
> > + }
> > + return len;
> > +}
> > +
> > /** Parse tokens for destroy command. */ static int
> > parse_destroy(struct context *ctx, const struct token *token, diff
> > --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > 1eec0612a4..06143a7501 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -1483,6 +1483,11 @@ port_action_handle_create(portid_t port_id,
> > uint32_t id,
> >
> > pia->age_type =
> > ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
> > age->context = &pia->age_type;
> > + } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
> > + struct rte_flow_action_conntrack *ct =
> > + (struct rte_flow_action_conntrack *)(uintptr_t)(action-
> >conf);
> > +
> > + memcpy(ct, &conntrack_context, sizeof(*ct));
> > }
> > /* Poisoning to make sure PMDs update it in case of error. */
> > memset(&error, 0x22, sizeof(error)); @@ -1564,11 +1569,24 @@
> > port_action_handle_update(portid_t port_id, uint32_t id, {
> > struct rte_flow_error error;
> > struct rte_flow_action_handle *action_handle;
> > + struct port_indirect_action *pia;
> > + const void *update;
> >
> > action_handle = port_action_handle_get_by_id(port_id, id);
> > if (!action_handle)
> > return -EINVAL;
> > - if (rte_flow_action_handle_update(port_id, action_handle,
> action,
> > + pia = action_get_by_id(port_id, id);
> > + if (!pia)
> > + return -EINVAL;
> > + switch (pia->type) {
> > + case RTE_FLOW_ACTION_TYPE_CONNTRACK:
> > + update = action->conf;
> > + break;
> > + default:
> > + update = action;
> > + break;
> > + }
> > + if (rte_flow_action_handle_update(port_id, action_handle,
> update,
> > &error)) {
> > return port_flow_complain(&error);
> > }
> > @@ -1621,6 +1639,51 @@ port_action_handle_query(portid_t port_id,
> > uint32_t id)
> > }
> > data = NULL;
> > break;
> > + case RTE_FLOW_ACTION_TYPE_CONNTRACK:
> > + if (!ret) {
> > + struct rte_flow_action_conntrack *ct = data;
> > +
> > + printf("Conntrack Context:\n"
> > + " Peer: %u, Flow dir: %s, Enable: %u\n"
> > + " Live: %u, SACK: %u, CACK: %u\n"
> > + " Packet dir: %s, Liberal: %u,
> State: %u\n"
> > + " Factor: %u, Retrans: %u, TCP
> flags: %u\n"
> > + " Last Seq: %u, Last ACK: %u\n"
> > + " Last Win: %u, Last End: %u\n",
> > + ct->peer_port,
> > + ct->is_original_dir ? "Original" : "Reply",
> > + ct->enable, ct->live_connection,
> > + ct->selective_ack, ct->challenge_ack_passed,
> > + ct->last_direction ? "Original" : "Reply",
> > + ct->liberal_mode, ct->state,
> > + ct->max_ack_window, ct-
> >retransmission_limit,
> > + ct->last_index, ct->last_seq, ct->last_ack,
> > + ct->last_window, ct->last_end);
> > + printf(" Original Dir:\n"
> > + " scale: %u, fin: %u, ack seen: %u\n"
> > + " unacked data: %u\n Sent end: %u,"
> > + " Reply end: %u, Max win: %u, Max
> ACK: %u\n",
> > + ct->original_dir.scale,
> > + ct->original_dir.close_initiated,
> > + ct->original_dir.last_ack_seen,
> > + ct->original_dir.data_unacked,
> > + ct->original_dir.sent_end,
> > + ct->original_dir.reply_end,
> > + ct->original_dir.max_win,
> > + ct->original_dir.max_ack);
> > + printf(" Reply Dir:\n"
> > + " scale: %u, fin: %u, ack seen: %u\n"
> > + " unacked data: %u\n Sent end: %u,"
> > + " Reply end: %u, Max win: %u, Max
> ACK: %u\n",
> > + ct->reply_dir.scale,
> > + ct->reply_dir.close_initiated,
> > + ct->reply_dir.last_ack_seen,
> > + ct->reply_dir.data_unacked,
> > + ct->reply_dir.sent_end, ct-
> >reply_dir.reply_end,
> > + ct->reply_dir.max_win, ct-
> >reply_dir.max_ack);
> > + }
> > + data = NULL;
> > + break;
> > default:
> > printf("Indirect action %u (type: %d) on port %u
> doesn't"
> > " support query\n", id, pia->type, port_id); diff
> --git
> > a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > d1eaaadb17..d7528f9cb5 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf
> > mplsoudp_decap_conf;
> >
> > extern enum rte_eth_rx_mq_mode rx_mq_mode;
> >
> > +extern struct rte_flow_action_conntrack conntrack_context;
> > +
> > static inline unsigned int
> > lcore_num(void)
> > {
> > --
> > 2.19.0.windows.1
BR. Bing
* Re: [dpdk-dev] [PATCH v3 3/3] doc: update for conntrack
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 3/3] doc: update " Bing Zhao
@ 2021-04-16 18:22 ` Thomas Monjalon
2021-04-16 18:30 ` Ajit Khaparde
1 sibling, 0 replies; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-16 18:22 UTC (permalink / raw)
To: Bing Zhao
Cc: orika, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
Doc should be added with the code.
16/04/2021 19:54, Bing Zhao:
> The updated documentations include:
> 1. Release notes
> 2. rte_flow.rst
1 & 2 can go in ethdev patch
> 3. testpmd user guide
3 can go in testpmd patch.
* Re: [dpdk-dev] [PATCH v3 3/3] doc: update for conntrack
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 3/3] doc: update " Bing Zhao
2021-04-16 18:22 ` Thomas Monjalon
@ 2021-04-16 18:30 ` Ajit Khaparde
2021-04-19 17:28 ` Bing Zhao
1 sibling, 1 reply; 45+ messages in thread
From: Ajit Khaparde @ 2021-04-16 18:30 UTC (permalink / raw)
To: Bing Zhao
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
dpdk-dev, Xiaoyun Li
On Fri, Apr 16, 2021 at 10:54 AM Bing Zhao <bingz@nvidia.com> wrote:
>
> The updated documentations include:
> 1. Release notes
> 2. rte_flow.rst
> 3. testpmd user guide
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 113 ++++++++++++++++++++
> doc/guides/rel_notes/release_21_05.rst | 4 +
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++++++
> 3 files changed, 152 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 2ecc48cfff..a1333819fc 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,14 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``CONNTRACK``
> +^^^^^^^^^^^^^^^^^^^
> +
> +Matches a conntrack state after conntrack action.
> +
> +- ``flags``: conntrack packet state flags.
> +- Default ``mask`` matches all state bits.
> +
> Actions
> ~~~~~~~
>
> @@ -2842,6 +2850,111 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
> | ``value`` | immediate value or a pointer to this value |
> +---------------+----------------------------------------------------------+
>
> +Action: ``CONNTRACK``
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +Create a conntrack (connection tracking) context with the provided information.
> +
> +In stateful session like TCP, the conntrack action provides the ability to
> +examine every packet of this connection and associate the state to every
> +packet. It will help to realize the stateful offloading with little software
s/stateful offloading/stateful offload of connections
> +participation. For example, only the control packets like SYN / FIN or packets
> +with invalid state should be handled by the software.
s/invalid state should be handled by the software/invalid state may be
handled by the software while the rest of the control frames may be
handled in hardware.
> +
> +A conntrack context should be created via ``rte_flow_action_handle_create()``
> +before using. Then the handle with ``INDIRECT`` type is used for a flow rule
> +creation. If a flow rule with an opposite direction needs to be created, the
> +``rte_flow_action_handle_update()`` should be used to modify the direction.
> +
> +Not all the fields of the ``struct rte_flow_action_conntrack`` will be used
> +for a conntrack context creating, depending on the HW.
s/context creating/context creation.
s/depending on the HW./This capability will depend on the underlying hardware
> +The ``struct rte_flow_modify_conntrack`` should be used for an updating.
> +
> +The current conntrack context information could be queried via the
> +``rte_flow_action_handle_query()`` interface.
> +
> +.. _table_rte_flow_action_conntrack:
> +
> +.. table:: CONNTRACK
> +
> + +--------------------------+-------------------------------------------------------------+
> + | Field | Value |
> + +==========================+=============================================================+
> + | ``peer_port`` | peer port number |
> + +--------------------------+-------------------------------------------------------------+
> + | ``is_original_dir`` | direction of this connection for flow rule creating |
s/for flow rule creating/for creating flow rule
> + +--------------------------+-------------------------------------------------------------+
> + | ``enable`` | enable the conntrack context |
> + +--------------------------+-------------------------------------------------------------+
> + | ``live_connection`` | one ack was seen for this connection |
> + +--------------------------+-------------------------------------------------------------+
> + | ``selective_ack`` | SACK enabled |
> + +--------------------------+-------------------------------------------------------------+
> + | ``challenge_ack_passed`` | a challenge ack has passed |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_direction`` | direction of the last passed packet |
> + +--------------------------+-------------------------------------------------------------+
> + | ``liberal_mode`` | only report state change |
> + +--------------------------+-------------------------------------------------------------+
> + | ``state`` | current state |
> + +--------------------------+-------------------------------------------------------------+
> + | ``max_ack_window`` | maximal window scaling factor |
> + +--------------------------+-------------------------------------------------------------+
> + | ``retransmission_limit`` | maximal retransmission times |
s/times/limit
> + +--------------------------+-------------------------------------------------------------+
> + | ``original_dir`` | TCP parameters of the original direction |
> + +--------------------------+-------------------------------------------------------------+
> + | ``reply_dir`` | TCP parameters of the reply direction |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_window`` | window value of the last passed packet |
s/value/size
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_seq`` | sequence value of the last passed packet |
s/value/number
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_ack`` | acknowledgement value the last passed packet |
s/value/number
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_end`` | sum acknowledgement and length value the last passed packet |
sum of ack number and length of the last passed packet
or
sum of acknowledgement number and length of the last passed packet
> + +--------------------------+-------------------------------------------------------------+
> +
> +.. _table_rte_flow_tcp_dir_param:
> +
> +.. table:: configuration parameters for each direction
> +
> + +---------------------+---------------------------------------------------------+
> + | Field | Value |
> + +=====================+=========================================================+
> + | ``scale`` | TCP window scaling factor |
> + +---------------------+---------------------------------------------------------+
> + | ``close_initiated`` | FIN sent from this direction |
> + +---------------------+---------------------------------------------------------+
> + | ``last_ack_seen`` | an ACK packet received |
> + +---------------------+---------------------------------------------------------+
> + | ``data_unacked`` | unacknowledged data for packets from this direction |
> + +---------------------+---------------------------------------------------------+
> + | ``sent_end`` | max{seq + len} seen in sent packets |
> + +---------------------+---------------------------------------------------------+
> + | ``reply_end`` | max{sack + max{win, 1}} seen in reply packets |
> + +---------------------+---------------------------------------------------------+
> + | ``max_win`` | max{max{win, 1}} + {sack - ack} seen in sent packets |
> + +---------------------+---------------------------------------------------------+
> + | ``max_ack`` | max{ack} + seen in sent packets |
> + +---------------------+---------------------------------------------------------+
> +
> +.. _table_rte_flow_modify_conntrack:
> +
> +.. table:: update a conntrack context
> +
> + +----------------+---------------------------------------+
> + | Field | Value |
> + +================+=======================================+
> + | ``new_ct`` | new conntrack information |
> + +----------------+---------------------------------------+
> + | ``direction`` | direction will be updated |
> + +----------------+---------------------------------------+
> + | ``state`` | other fields except will be updated |
except what?
direction??
> + +----------------+---------------------------------------+
> + | ``reserved`` | reserved bits |
> + +----------------+---------------------------------------+
> +
> Negative types
> ~~~~~~~~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index e6f99350af..824eb72981 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -183,6 +183,10 @@ New Features
> the events across multiple stages.
> * This also reduced the scheduling overhead on a event device.
>
> +* **Added conntrack support for rte_flow.**
> +
> + * Added conntrack action and item for stateful offloading.
> +
> * **Updated testpmd.**
>
> * Added a command line option to configure forced speed for Ethernet port.
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 1fa6e2000e..4c029776aa 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -3791,6 +3791,8 @@ This section lists supported pattern items and their attributes, if any.
> - ``s_field {unsigned}``: S field.
> - ``seid {unsigned}``: session endpoint identifier.
>
> +- ``conntrack``: match conntrack state.
> +
> Actions list
> ^^^^^^^^^^^^
>
> @@ -4925,6 +4927,39 @@ NVGRE encapsulation header and sent to port id 0.
> testpmd> flow create 0 ingress transfer pattern eth / end actions
> sample ratio 1 index 0 / port_id id 2 / end
>
> +Sample conntrack rules
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +Conntrack rules can be set by the following commands
> +
> +Need to construct the connection context with provided information.
> +In the first table, create a flow rule by using conntrack action and jump to
> +the next table. In the next table, create a rule to check the state.
> +
> +::
> +
> + testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
> + last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
> + last_seq 2632987379 last_ack 2532480967 last_end 2632987379
> + last_index 0x8
> + testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
> + sent_end 2632987379 reply_end 2633016339 max_win 28960
> + max_ack 2632987379
> + testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
> + sent_end 2532480967 reply_end 2532546247 max_win 65280
> + max_ack 2532480967
> + testpmd> flow indirect_action 0 create ingress action conntrack / end
> + testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
> + testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
> +
> +Construct the conntrack again with only "is_orig" set to 0 (other fields are
> +ignored), then use "update" interface to update the direction. Create flow
s/use/use the
> +rules like above for the peer port.
By peer, do you mean peer system? Or remote/dst port of the TCP connection?
> +
> +::
> +
> + testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
> +
> BPF Functions
> --------------
>
> --
> 2.19.0.windows.1
>
* Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 1/3] " Bing Zhao
@ 2021-04-16 18:30 ` Ajit Khaparde
2021-04-19 14:08 ` Thomas Monjalon
2021-04-19 14:06 ` Thomas Monjalon
1 sibling, 1 reply; 45+ messages in thread
From: Ajit Khaparde @ 2021-04-16 18:30 UTC (permalink / raw)
To: Bing Zhao
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
dpdk-dev, Xiaoyun Li
On Fri, Apr 16, 2021 at 10:54 AM Bing Zhao <bingz@nvidia.com> wrote:
>
> This commit introduces the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow rule the application should
> add the conntrack action and jump to the next flow table. In the
> following flow rule(s) of the next table, the application should use
> the conntrack item to match on the result.
>
> A TCP connection has two directions traffic. To set a conntrack
s/has two directions traffic/can have traffic in two directions.
> action context correctly, the information from packets of both
> directions is required.
>
> The conntrack action should be created on one ethdev port and supply
> the peer ethdev port as a parameter to the action. After context
> created, it could only be used between these two ethdev ports
> (dual-port mode) or a single port. The application should modify the
> action via the API "rte_flow_action_handle_update" only before using
> it to create a flow rule with conntrack for the opposite
> direction. This will help the driver to recognize the direction of
> the flow to be created, especially in the single-port mode, in which
> case the traffic from both directions will go through the same
> ethdev port if the application works as a "forwarding engine" but
> not an end point. There is no need to call the update interface if
> the subsequent flow rules have nothing to be changed.
>
> Query will be supported via "rte_action_handle_query" interface,
> about the current packets information and connection status. The
> fields query capabilities depends on the HW.
How about this:
The fields which can be queried will depend on the HW capabilities.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to make sure the conntrack module
> works correctly without missing any packet. Only the valid packets
> should pass the conntrack, packets with invalid TCP information,
> like out of window, or with invalid header, like malformed, should
> not pass.
>
> Naming and definition:
> https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
> https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
>
> Other reference:
> https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> lib/librte_ethdev/rte_flow.c | 2 +
> lib/librte_ethdev/rte_flow.h | 207 +++++++++++++++++++++++++++++++++++
> 2 files changed, 209 insertions(+)
>
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index 0d2610b7c4..c7c7108933 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
> MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
> MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
> MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
> + MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
> };
>
> /** Generate flow_action[] entry. */
> @@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
> * indirect action handle.
> */
> MK_FLOW_ACTION(INDIRECT, 0),
> + MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
> };
>
> int
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 324d00abdc..c9d7bdfa57 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches conntrack state.
> + *
> + * @see struct rte_flow_item_conntrack.
> + */
> + RTE_FLOW_ITEM_TYPE_CONNTRACK,
> };
>
> /**
> @@ -1685,6 +1694,51 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +/**
> + * The packet is valid after conntrack checking.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
> +/**
> + * The state of the connection is changed.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
> +/**
> + * Error is detected on this packet for this connection and
> + * an invalid state is set.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
> +/**
> + * The packet contains some bad field(s) and cannot continue
> + * with the conntrack module checking.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_CONNTRACK
> + *
> + * Matches the state of a packet after it passed the connection tracking
> + * examination. The state is a bitmap of one RTE_FLOW_CONNTRACK_PKT_STATE*
> + * or a reasonable combination of these bits.
> + */
> +struct rte_flow_item_conntrack {
> + uint32_t flags;
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
> + .flags = 0xffffffff,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> @@ -2277,6 +2331,15 @@ enum rte_flow_action_type {
> * same port or across different ports.
> */
> RTE_FLOW_ACTION_TYPE_INDIRECT,
> +
> + /**
> + * [META]
> + *
> + * Enable tracking a TCP connection state.
> + *
> + * @see struct rte_flow_action_conntrack.
> + */
> + RTE_FLOW_ACTION_TYPE_CONNTRACK,
> };
>
> /**
> @@ -2875,6 +2938,150 @@ struct rte_flow_action_set_dscp {
> */
> struct rte_flow_action_handle;
>
> +/**
> + * The state of a TCP connection.
> + */
> +enum rte_flow_conntrack_state {
> + /**< SYN-ACK packet was seen. */
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< 3-way handshake was done. */
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< Second FIN was received, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< Second FIN was ACKed, connection was closed. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> +};
> +
> +/**
> + * The last passed TCP packet flags of a connection.
> + */
> +enum rte_flow_conntrack_tcp_last_index {
> + RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYN = RTE_BIT32(0), /**< With SYN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_SYNACK = RTE_BIT32(1), /**< With SYNACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_FIN = RTE_BIT32(2), /**< With FIN flag. */
> + RTE_FLOW_CONNTRACK_FLAG_ACK = RTE_BIT32(3), /**< With ACK flag. */
> + RTE_FLOW_CONNTRACK_FLAG_RST = RTE_BIT32(4), /**< With RST flag. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * Configuration parameters for each direction of a TCP connection.
> + */
> +struct rte_flow_tcp_dir_param {
> + /** TCP window scaling factor, 0xF to disable. */
> + uint32_t scale:4;
> + /** The FIN was sent by this direction. */
> + uint32_t close_initiated:1;
> + /** An ACK packet has been received by this side. */
> + uint32_t last_ack_seen:1;
> + /**
> + * If set, it indicates that there is unacknowledged data for the
> + * packets sent from this direction.
> + */
> + uint32_t data_unacked:1;
> + /**
> + * Maximal value of sequence + payload length in sent
> + * packets (next ACK from the opposite direction).
> + */
> + uint32_t sent_end;
> + /**
> + * Maximal value of (ACK + window size) in received packet + length
> + * over sent packet (maximal sequence could be sent).
> + */
> + uint32_t reply_end;
> + /** Maximal value of actual window size in sent packets. */
> + uint32_t max_win;
> + /** Maximal value of ACK in sent packets. */
> + uint32_t max_ack;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Configuration and initial state for the connection tracking module.
> + * This structure could be used for both setting and query.
> + */
> +struct rte_flow_action_conntrack {
> + /** The peer port number, can be the same port. */
> + uint16_t peer_port;
> + /**
> + * Direction of this connection when creating a flow, the value
> + * only affects the subsequent flows creation.
s/flows/flow
or
s/the subsequent flows creation/the creation of subsequent flows
> + */
> + uint32_t is_original_dir:1;
> + /**
> + * Enable / disable the conntrack HW module. When disabled, the
> + * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
> + * In this state the HW will act as passthrough.
> + * It only affects this conntrack object in the HW without any effect
> + * to the other objects.
> + */
> + uint32_t enable:1;
> + /** At least one ack was seen after the connection was established. */
> + uint32_t live_connection:1;
> + /** Enable selective ACK on this connection. */
> + uint32_t selective_ack:1;
> + /** A challenge ack has passed. */
> + uint32_t challenge_ack_passed:1;
> + /**
> + * 1: The last packet is seen from the original direction.
> + * 0: The last packet is seen from the reply direction.
> + */
> + uint32_t last_direction:1;
> + /** No TCP check will be done except the state change. */
> + uint32_t liberal_mode:1;
> + /**<The current state of this connection. */
> + enum rte_flow_conntrack_state state;
> + /** Scaling factor for maximal allowed ACK window. */
> + uint8_t max_ack_window;
> + /** Maximal allowed number of retransmission times. */
s/times/limit
> + uint8_t retransmission_limit;
> + /** TCP parameters of the original direction. */
> + struct rte_flow_tcp_dir_param original_dir;
> + /** TCP parameters of the reply direction. */
> + struct rte_flow_tcp_dir_param reply_dir;
> + /** The window value of the last packet passed this conntrack. */
s/value/size
> + uint16_t last_window;
> + enum rte_flow_conntrack_tcp_last_index last_index;
> + /** The sequence of the last packet passed this conntrack. */
sequence number of the ...
> + uint32_t last_seq;
> + /** The acknowledgement of the last packet passed this conntrack. */
ACK number of the..
s/passed this/passed by this
or
passing this
> + uint32_t last_ack;
> + /**
> + * The total value ACK + payload length of the last packet
> + * passed this conntrack.
s/passed this/passed by this
or passing this
> + */
> + uint32_t last_end;
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_CONNTRACK
> + *
> + * Wrapper structure for the context update interface.
> + * Ports cannot support updating, and the only valid solution is to
> + * destroy the old context and create a new one instead.
> + */
> +struct rte_flow_modify_conntrack {
> + /** New connection tracking parameters to be updated. */
> + struct rte_flow_action_conntrack new_ct;
> + /** The direction field will be updated. */
> + uint32_t direction:1;
> + /** All the other fields except direction will be updated. */
> + uint32_t state:1;
> + /** Reserved bits for the future usage. */
> + uint32_t reserved:30;
> +};
> +
> /**
> * Field IDs for MODIFY_FIELD action.
> */
> --
> 2.19.0.windows.1
>
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-16 18:05 ` Bing Zhao
@ 2021-04-16 21:47 ` Ajit Khaparde
2021-04-17 6:10 ` Bing Zhao
0 siblings, 1 reply; 45+ messages in thread
From: Ajit Khaparde @ 2021-04-16 21:47 UTC (permalink / raw)
To: Bing Zhao
Cc: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit,
andrew.rybchenko, dev
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this structure may change without prior
> > notice
> > > + *
> > > + * Configuration parameters for each direction of a TCP
> > connection.
> > > + */
> > > +struct rte_flow_tcp_dir_param {
> > > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> > disable. */
> > > + uint32_t close_initiated:1; /**< The FIN was sent by this
> > direction. */
> > > + /**< An ACK packet has been received by this side. */
> > > + uint32_t last_ack_seen:1;
> > > + /**< If set, indicates that there is unacked data of the
> > connection. */
> > > + uint32_t data_unacked:1;
> > > + /**< Maximal value of sequence + payload length over sent
> > > + * packets (next ACK from the opposite direction).
> > > + */
> > > + uint32_t sent_end;
> > > + /**< Maximal value of (ACK + window size) over received packet
> > +
> > > length
> > > + * over sent packet (maximal sequence could be sent).
> > > + */
> > > + uint32_t reply_end;
> >
> > This comment is for all members that are part of the packet, Do you
> > think it should be in network order?
>
> Almost none of the fields are part of the packet. Indeed, most of them are calculated from the packets information. So I prefer to keep the host order easy for using and
> keep all the fields of the whole structure the same endianness format.
> What do you think?
Can you mention it in the documentation and comments?
That all the values are in host byte order and need to be converted to
network byte order if the HW needs it that way
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-16 21:47 ` Ajit Khaparde
@ 2021-04-17 6:10 ` Bing Zhao
2021-04-17 14:54 ` Ajit Khaparde
0 siblings, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-17 6:10 UTC (permalink / raw)
To: Ajit Khaparde
Cc: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit,
andrew.rybchenko, dev
Hi Ajit,
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Saturday, April 17, 2021 5:47 AM
> To: Bing Zhao <bingz@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru; dev@dpdk.org
> Subject: Re: [PATCH v2 1/2] ethdev: introduce conntrack flow action
> and item
>
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > notice
> > > > + *
> > > > + * Configuration parameters for each direction of a TCP
> > > connection.
> > > > + */
> > > > +struct rte_flow_tcp_dir_param {
> > > > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> > > disable. */
> > > > + uint32_t close_initiated:1; /**< The FIN was sent by this
> > > direction. */
> > > > + /**< An ACK packet has been received by this side. */
> > > > + uint32_t last_ack_seen:1;
> > > > + /**< If set, indicates that there is unacked data of the
> > > connection. */
> > > > + uint32_t data_unacked:1;
> > > > + /**< Maximal value of sequence + payload length over sent
> > > > + * packets (next ACK from the opposite direction).
> > > > + */
> > > > + uint32_t sent_end;
> > > > + /**< Maximal value of (ACK + window size) over received
> packet
> > > +
> > > > length
> > > > + * over sent packet (maximal sequence could be sent).
> > > > + */
> > > > + uint32_t reply_end;
> > >
> > > This comment is for all members that are part of the packet, Do
> you
> > > think it should be in network order?
> >
> > Almost none of the fields are part of the packet. Indeed, most of
> them are calculated from the packets information. So I prefer to
> keep the host order easy for using and
> > keep all the fields of the whole structure the same endianness
> format.
> > What do you think?
>
> Can you mention it in the documentation and comments?
> That all the values are in host byte order and need to be converted
> to
> network byte order if the HW needs it that way
Sure, I think it would be better to add it in the documentation.
What do you think?
BR. Bing
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: introduce conntrack flow action and item
2021-04-17 6:10 ` Bing Zhao
@ 2021-04-17 14:54 ` Ajit Khaparde
0 siblings, 0 replies; 45+ messages in thread
From: Ajit Khaparde @ 2021-04-17 14:54 UTC (permalink / raw)
To: Bing Zhao
Cc: Ori Kam, NBU-Contact-Thomas Monjalon, ferruh.yigit,
andrew.rybchenko, dev
On Fri, Apr 16, 2021 at 11:10 PM Bing Zhao <bingz@nvidia.com> wrote:
>
> Hi Ajit,
>
> > -----Original Message-----
> > From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > Sent: Saturday, April 17, 2021 5:47 AM
> > To: Bing Zhao <bingz@nvidia.com>
> > Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>; ferruh.yigit@intel.com;
> > andrew.rybchenko@oktetlabs.ru; dev@dpdk.org
> > Subject: Re: [PATCH v2 1/2] ethdev: introduce conntrack flow action
> > and item
> >
> > > > > +
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > notice
> > > > > + *
> > > > > + * Configuration parameters for each direction of a TCP
> > > > connection.
> > > > > + */
> > > > > +struct rte_flow_tcp_dir_param {
> > > > > + uint32_t scale:4; /**< TCP window scaling factor, 0xF to
> > > > disable. */
> > > > > + uint32_t close_initiated:1; /**< The FIN was sent by this
> > > > direction. */
> > > > > + /**< An ACK packet has been received by this side. */
> > > > > + uint32_t last_ack_seen:1;
> > > > > + /**< If set, indicates that there is unacked data of the
> > > > connection. */
> > > > > + uint32_t data_unacked:1;
> > > > > + /**< Maximal value of sequence + payload length over sent
> > > > > + * packets (next ACK from the opposite direction).
> > > > > + */
> > > > > + uint32_t sent_end;
> > > > > + /**< Maximal value of (ACK + window size) over received
> > packet
> > > > +
> > > > > length
> > > > > + * over sent packet (maximal sequence could be sent).
> > > > > + */
> > > > > + uint32_t reply_end;
> > > >
> > > > This comment is for all members that are part of the packet, Do
> > you
> > > > think it should be in network order?
> > >
> > > Almost none of the fields are part of the packet. Indeed, most of
> > them are calculated from the packets information. So I prefer to
> > keep the host order easy for using and
> > > keep all the fields of the whole structure the same endianness
> > format.
> > > What do you think?
> >
> > Can you mention it in the documentation and comments?
> > That all the values are in host byte order and need to be converted
> > to
> > network byte order if the HW needs it that way
>
> Sure, I think it would be better to add it in the documentation.
> What do you think?
Documentation - yes.
In the comments of the structure in the header file - if possible.
>
> BR. Bing
* Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 1/3] " Bing Zhao
2021-04-16 18:30 ` Ajit Khaparde
@ 2021-04-19 14:06 ` Thomas Monjalon
2021-04-19 16:13 ` Bing Zhao
1 sibling, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-19 14:06 UTC (permalink / raw)
To: Bing Zhao
Cc: orika, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
16/04/2021 19:54, Bing Zhao:
> +/**
> + * The packet is valid after conntrack checking.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
> +/**
> + * The state of the connection is changed.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
> +/**
> + * Error is detected on this packet for this connection and
> + * an invalid state is set.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
> +/**
> + * The HW connection tracking module is disabled.
> + * It can be due to application command or an invalid state.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
> +/**
> + * The packet contains some bad field(s) and cannot continue
> + * with the conntrack module checking.
> + */
> +#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
I like it better now that all bits have the same prefix, thanks.
> +enum rte_flow_conntrack_state {
> + /**< SYN-ACK packet was seen. */
> + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> + /**< 3-way handshake was done. */
> + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> + /**< First FIN packet was received to close the connection. */
> + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> + /**< First FIN was ACKed. */
> + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> + /**< Second FIN was received, waiting for the last ACK. */
> + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> + /**< Second FIN was ACKed, connection was closed. */
> + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> +};
These doxygen comments should not have "<" in them,
because they are "before".
[...]
> + /** No TCP check will be done except the state change. */
> + uint32_t liberal_mode:1;
> + /**<The current state of this connection. */
s,/**<,/** ,
> + enum rte_flow_conntrack_state state;
Looks good overrall, thanks.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-16 18:30 ` Ajit Khaparde
@ 2021-04-19 14:08 ` Thomas Monjalon
2021-04-19 16:21 ` Bing Zhao
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-19 14:08 UTC (permalink / raw)
To: Bing Zhao, Ajit Khaparde
Cc: dev, Ori Kam, Ferruh Yigit, Andrew Rybchenko, dpdk-dev, Xiaoyun Li
16/04/2021 20:30, Ajit Khaparde:
> On Fri, Apr 16, 2021 at 10:54 AM Bing Zhao <bingz@nvidia.com> wrote:
> > +struct rte_flow_action_conntrack {
> > + /** The peer port number, can be the same port. */
> > + uint16_t peer_port;
> > + /**
> > + * Direction of this connection when creating a flow, the value
> > + * only affects the subsequent flows creation.
>
> s/flows/flow
> or
> s/the subsequent flows creation/the creation of subsequent flows
s/flows/flow rules/
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-19 14:06 ` Thomas Monjalon
@ 2021-04-19 16:13 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 16:13 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon
Cc: Ori Kam, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, April 19, 2021 10:06 PM
> To: Bing Zhao <bingz@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; ferruh.yigit@intel.com;
> andrew.rybchenko@oktetlabs.ru; dev@dpdk.org;
> ajit.khaparde@broadcom.com; xiaoyun.li@intel.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack
> flow action and item
>
> External email: Use caution opening links or attachments
>
>
> 16/04/2021 19:54, Bing Zhao:
> > +/**
> > + * The packet is valid after conntrack checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
> > +/**
> > + * The state of the connection is changed.
> > + */
> > +#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
> > +/**
> > + * Error is detected on this packet for this connection and
> > + * an invalid state is set.
> > + */
> > +#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
> > +/**
> > + * The HW connection tracking module is disabled.
> > + * It can be due to application command or an invalid state.
> > + */
> > +#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
> > +/**
> > + * The packet contains some bad field(s) and cannot continue
> > + * with the conntrack module checking.
> > + */
> > +#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
>
> I like it better now that all bits have the same prefix, thanks.
>
> > +enum rte_flow_conntrack_state {
> > + /**< SYN-ACK packet was seen. */
> > + RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
> > + /**< 3-way handshake was done. */
> > + RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
> > + /**< First FIN packet was received to close the connection.
> */
> > + RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
> > + /**< First FIN was ACKed. */
> > + RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
> > + /**< Second FIN was received, waiting for the last ACK. */
> > + RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
> > + /**< Second FIN was ACKed, connection was closed. */
> > + RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
> > +};
>
> These doxygen comments should not have "<" in them, because they are
> "before".
All "<" are removed, thanks.
>
> [...]
> > + /** No TCP check will be done except the state change. */
> > + uint32_t liberal_mode:1;
> > + /**<The current state of this connection. */
>
> s,/**<,/** ,
>
> > + enum rte_flow_conntrack_state state;
>
> Looks good overrall, thanks.
>
BR. Bing
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack flow action and item
2021-04-19 14:08 ` Thomas Monjalon
@ 2021-04-19 16:21 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 16:21 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon, Ajit Khaparde
Cc: dev, Ori Kam, Ferruh Yigit, Andrew Rybchenko, dpdk-dev, Xiaoyun Li
Hi,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, April 19, 2021 10:08 PM
> To: Bing Zhao <bingz@nvidia.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; dpdk-dev <dev@dpdk.org>; Xiaoyun Li
> <xiaoyun.li@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 1/3] ethdev: introduce conntrack
> flow action and item
>
> External email: Use caution opening links or attachments
>
>
> 16/04/2021 20:30, Ajit Khaparde:
> > On Fri, Apr 16, 2021 at 10:54 AM Bing Zhao <bingz@nvidia.com>
> wrote:
> > > +struct rte_flow_action_conntrack {
> > > + /** The peer port number, can be the same port. */
> > > + uint16_t peer_port;
> > > + /**
> > > + * Direction of this connection when creating a flow,
> the value
> > > + * only affects the subsequent flows creation.
> >
> > s/flows/flow
> > or
> > s/the subsequent flows creation/the creation of subsequent flows
>
> s/flows/flow rules/
Done
>
>
>
BR. Bing
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
` (2 preceding siblings ...)
2021-04-16 17:54 ` [dpdk-dev] [PATCH v3 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-04-19 17:16 ` Bing Zhao
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 1/3] " Bing Zhao
` (2 more replies)
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 0/2] ethdev: introduce conntrack flow action and item Bing Zhao
4 siblings, 3 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:16 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
This patch set includes the conntrack action and item definitions as
well as the testpmd CLI proposal.
Documents of release notes and guides are also updated.
---
v2: add testpmd CLI proposal
v3: add doc update
v4: fix building and address comments for doc and header file
---
Bing Zhao (3):
ethdev: introduce conntrack flow action and item
app/testpmd: add CLI for conntrack
doc: update for conntrack
app/test-pmd/cmdline.c | 355 ++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 +++++
app/test-pmd/config.c | 65 +++-
app/test-pmd/testpmd.h | 2 +
doc/guides/prog_guide/rte_flow.rst | 118 +++++++
doc/guides/rel_notes/release_21_05.rst | 4 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 212 ++++++++++++
9 files changed, 884 insertions(+), 1 deletion(-)
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] ethdev: introduce conntrack flow action and item
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-04-19 17:16 ` Bing Zhao
2021-04-19 17:33 ` Ori Kam
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add CLI for conntrack Bing Zhao
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 3/3] doc: update " Bing Zhao
2 siblings, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:16 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
This commit introduces the conntrack action and item.
Usually the HW offloading is stateless. For some stateful offloading
like a TCP connection, HW module will help provide the ability of a
full offloading w/o SW participation after the connection was
established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one ethdev port, with the
peer ethdev port supplied as a parameter to the action. After the
context is created, it can only be used between these two ethdev
ports (dual-port mode) or on a single port. The application should
modify the action via the API "rte_flow_action_handle_update"
before using it to create a flow rule with conntrack for the
opposite direction. This helps the driver recognize the direction
of the flow rule to be created, especially in single-port mode,
where the traffic from both directions goes through the same ethdev
port when the application works as a "forwarding engine" rather
than an end point. There is no need to call the update interface if
the subsequent flow rules need no changes.
Query will be supported via the "rte_flow_action_handle_query"
interface to retrieve the current packet information and connection
status. Which fields can be queried depends on the HW.
For the packets received during the conntrack setup, it is
suggested to re-inject them in order to make sure the conntrack
module works correctly without missing any packet. Only valid
packets should pass the conntrack; packets with invalid TCP
information, like out of window, or with an invalid header, like
malformed, should not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/
netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/
nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 212 +++++++++++++++++++++++++++++++++++
2 files changed, 214 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 0d2610b7c4..c7c7108933 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+ MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
};
/** Generate flow_action[] entry. */
@@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
* indirect action handle.
*/
MK_FLOW_ACTION(INDIRECT, 0),
+ MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
};
int
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 0447d36002..dae16b3433 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -30,6 +30,7 @@
#include <rte_esp.h>
#include <rte_higig.h>
#include <rte_ecpri.h>
+#include <rte_bitops.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
@@ -551,6 +552,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * @see struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1685,6 +1695,51 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is valid after conntrack checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
+/**
+ * The state of the connection is changed.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
+/**
+ * Error is detected on this packet for this connection and
+ * an invalid state is set.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
+/**
+ * The packet contains some bad field(s) and cannot continue
+ * with the conntrack module checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bitmap of one RTE_FLOW_CONNTRACK_PKT_STATE*
+ * or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2278,6 +2333,15 @@ enum rte_flow_action_type {
* or different ethdev ports.
*/
RTE_FLOW_ACTION_TYPE_INDIRECT,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * @see struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2876,6 +2940,154 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_action_handle;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ /** SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /** 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /** First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /** First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /** Second FIN was received, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /** Second FIN was ACKed, connection was closed. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_tcp_last_index {
+ RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYN = RTE_BIT32(0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYNACK = RTE_BIT32(1), /**< With SYNACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_FIN = RTE_BIT32(2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_ACK = RTE_BIT32(3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_RST = RTE_BIT32(4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ * All fields should be in host byte order.
+ * If needed, driver should convert all fields to network byte order
+ * if HW needs them in that way.
+ */
+struct rte_flow_tcp_dir_param {
+ /** TCP window scaling factor, 0xF to disable. */
+ uint32_t scale:4;
+ /** The FIN was sent by this direction. */
+ uint32_t close_initiated:1;
+ /** An ACK packet has been received by this side. */
+ uint32_t last_ack_seen:1;
+ /**
+ * If set, it indicates that there is unacknowledged data for the
+ * packets sent from this direction.
+ */
+ uint32_t data_unacked:1;
+ /**
+ * Maximal value of sequence + payload length in sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t sent_end;
+ /**
+ * Maximal value of (ACK + window size) in received packet + length
+ * over sent packet (maximal sequence could be sent).
+ */
+ uint32_t reply_end;
+ /** Maximal value of actual window size in sent packets. */
+ uint32_t max_win;
+ /** Maximal value of ACK in sent packets. */
+ uint32_t max_ack;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ * All fields should be in host byte order.
+ */
+struct rte_flow_action_conntrack {
+ /** The peer port number, can be the same port. */
+ uint16_t peer_port;
+ /**
+ * Direction of this connection when creating a flow rule, the
+ * value only affects the creation of subsequent flow rules.
+ */
+ uint32_t is_original_dir:1;
+ /**
+ * Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_FLAG_DISABLED.
+ * In this state the HW will act as passthrough.
+ * It only affects this conntrack object in the HW without any effect
+ * to the other objects.
+ */
+ uint32_t enable:1;
+ /** At least one ack was seen after the connection was established. */
+ uint32_t live_connection:1;
+ /** Enable selective ACK on this connection. */
+ uint32_t selective_ack:1;
+ /** A challenge ack has passed. */
+ uint32_t challenge_ack_passed:1;
+ /**
+ * 1: The last packet is seen from the original direction.
+ * 0: The last packet is seen from the reply direction.
+ */
+ uint32_t last_direction:1;
+ /** No TCP check will be done except the state change. */
+ uint32_t liberal_mode:1;
+ /** The current state of this connection. */
+ enum rte_flow_conntrack_state state;
+ /** Scaling factor for maximal allowed ACK window. */
+ uint8_t max_ack_window;
+ /** Maximal allowed number of retransmission times. */
+ uint8_t retransmission_limit;
+ /** TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /** TCP parameters of the reply direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /** The window value of the last packet passed this conntrack. */
+ uint16_t last_window;
+ enum rte_flow_conntrack_tcp_last_index last_index;
+ /** The sequence of the last packet passed this conntrack. */
+ uint32_t last_seq;
+ /** The acknowledgment of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /**
+ * The total value ACK + payload length of the last packet
+ * passed this conntrack.
+ */
+ uint32_t last_end;
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * Ports cannot support updating, and the only valid solution is to
+ * destroy the old context and create a new one instead.
+ */
+struct rte_flow_modify_conntrack {
+ /** New connection tracking parameters to be updated. */
+ struct rte_flow_action_conntrack new_ct;
+ /** The direction field will be updated. */
+ uint32_t direction:1;
+ /** All the other fields except direction will be updated. */
+ uint32_t state:1;
+ /** Reserved bits for the future usage. */
+ uint32_t reserved:30;
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v4 2/3] app/testpmd: add CLI for conntrack
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 1/3] " Bing Zhao
@ 2021-04-19 17:16 ` Bing Zhao
2021-04-19 17:35 ` Ori Kam
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 3/3] doc: update " Bing Zhao
2 siblings, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:16 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow rule creation. For updating, the same
structure is also needed, together with the update command
"conntrack_update", to update the "dir" or "ctx".
After the flow rule with the conntrack action is created, the packet
should jump to the next flow rule for result checking with the
conntrack item. The state is defined with bits, and a valid
combination of these bits could be supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
app/test-pmd/cmdline.c | 355 ++++++++++++++++++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 ++++++++++
app/test-pmd/config.c | 65 ++++++-
app/test-pmd/testpmd.h | 2 +
4 files changed, 513 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4d9e038ce8..d282c7cad6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13621,6 +13621,359 @@ cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
},
};
+/** Set connection tracking object common details */
+struct cmd_set_conntrack_common_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t common;
+ cmdline_fixed_string_t peer;
+ cmdline_fixed_string_t is_orig;
+ cmdline_fixed_string_t enable;
+ cmdline_fixed_string_t live;
+ cmdline_fixed_string_t sack;
+ cmdline_fixed_string_t cack;
+ cmdline_fixed_string_t last_dir;
+ cmdline_fixed_string_t liberal;
+ cmdline_fixed_string_t state;
+ cmdline_fixed_string_t max_ack_win;
+ cmdline_fixed_string_t retrans;
+ cmdline_fixed_string_t last_win;
+ cmdline_fixed_string_t last_seq;
+ cmdline_fixed_string_t last_ack;
+ cmdline_fixed_string_t last_end;
+ cmdline_fixed_string_t last_index;
+ uint8_t stat;
+ uint8_t factor;
+ uint16_t peer_port;
+ uint32_t is_original;
+ uint32_t en;
+ uint32_t is_live;
+ uint32_t s_ack;
+ uint32_t c_ack;
+ uint32_t ld;
+ uint32_t lb;
+ uint8_t re_num;
+ uint8_t li;
+ uint16_t lw;
+ uint32_t ls;
+ uint32_t la;
+ uint32_t le;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_common_com =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ common, "com");
+cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer, "peer");
+cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer_port, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_orig, "is_orig");
+cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_original, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ enable, "enable");
+cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ en, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_live =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ live, "live");
+cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_live, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ sack, "sack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ s_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ cack, "cack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ c_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_dir, "last_dir");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ld, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ liberal, "liberal");
+cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lb, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_state =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ state, "state");
+cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ stat, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ max_ack_win, "max_ack_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_max_ackwin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ factor, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ retrans, "r_lim");
+cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ re_num, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_win, "last_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lw, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_seq, "last_seq");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ls, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_ack, "last_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ la, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_end, "last_end");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ le, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_index =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_index, "last_index");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_index_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ li, RTE_UINT8);
+
+static void cmd_set_conntrack_common_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_common_result *res = parsed_result;
+
+ /* No need to swap to big endian. */
+ conntrack_context.peer_port = res->peer_port;
+ conntrack_context.is_original_dir = res->is_original;
+ conntrack_context.enable = res->en;
+ conntrack_context.live_connection = res->is_live;
+ conntrack_context.selective_ack = res->s_ack;
+ conntrack_context.challenge_ack_passed = res->c_ack;
+ conntrack_context.last_direction = res->ld;
+ conntrack_context.liberal_mode = res->lb;
+ conntrack_context.state = (enum rte_flow_conntrack_state)res->stat;
+ conntrack_context.max_ack_window = res->factor;
+ conntrack_context.retransmission_limit = res->re_num;
+ conntrack_context.last_window = res->lw;
+ conntrack_context.last_index =
+ (enum rte_flow_conntrack_tcp_last_index)res->li;
+ conntrack_context.last_seq = res->ls;
+ conntrack_context.last_ack = res->la;
+ conntrack_context.last_end = res->le;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_common = {
+ .f = cmd_set_conntrack_common_parsed,
+ .data = NULL,
+ .help_str = "set conntrack com peer <port_id> is_orig <dir> enable <en>"
+ " live <ack_seen> sack <en> cack <passed> last_dir <dir>"
+ " liberal <en> state <s> max_ack_win <factor> r_lim <num>"
+ " last_win <win> last_seq <seq> last_ack <ack> last_end <end>"
+ " last_index <flag>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_common_com,
+ (void *)&cmd_set_conntrack_common_peer,
+ (void *)&cmd_set_conntrack_common_peer_value,
+ (void *)&cmd_set_conntrack_common_is_orig,
+ (void *)&cmd_set_conntrack_common_is_orig_value,
+ (void *)&cmd_set_conntrack_common_enable,
+ (void *)&cmd_set_conntrack_common_enable_value,
+ (void *)&cmd_set_conntrack_common_live,
+ (void *)&cmd_set_conntrack_common_live_value,
+ (void *)&cmd_set_conntrack_common_sack,
+ (void *)&cmd_set_conntrack_common_sack_value,
+ (void *)&cmd_set_conntrack_common_cack,
+ (void *)&cmd_set_conntrack_common_cack_value,
+ (void *)&cmd_set_conntrack_common_last_dir,
+ (void *)&cmd_set_conntrack_common_last_dir_value,
+ (void *)&cmd_set_conntrack_common_liberal,
+ (void *)&cmd_set_conntrack_common_liberal_value,
+ (void *)&cmd_set_conntrack_common_state,
+ (void *)&cmd_set_conntrack_common_state_value,
+ (void *)&cmd_set_conntrack_common_max_ackwin,
+ (void *)&cmd_set_conntrack_common_max_ackwin_value,
+ (void *)&cmd_set_conntrack_common_retrans,
+ (void *)&cmd_set_conntrack_common_retrans_value,
+ (void *)&cmd_set_conntrack_common_last_win,
+ (void *)&cmd_set_conntrack_common_last_win_value,
+ (void *)&cmd_set_conntrack_common_last_seq,
+ (void *)&cmd_set_conntrack_common_last_seq_value,
+ (void *)&cmd_set_conntrack_common_last_ack,
+ (void *)&cmd_set_conntrack_common_last_ack_value,
+ (void *)&cmd_set_conntrack_common_last_end,
+ (void *)&cmd_set_conntrack_common_last_end_value,
+ (void *)&cmd_set_conntrack_common_last_index,
+ (void *)&cmd_set_conntrack_common_last_index_value,
+ NULL,
+ },
+};
+
+/** Set connection tracking object both directions' details */
+struct cmd_set_conntrack_dir_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t dir;
+ cmdline_fixed_string_t scale;
+ cmdline_fixed_string_t fin;
+ cmdline_fixed_string_t ack_seen;
+ cmdline_fixed_string_t unack;
+ cmdline_fixed_string_t sent_end;
+ cmdline_fixed_string_t reply_end;
+ cmdline_fixed_string_t max_win;
+ cmdline_fixed_string_t max_ack;
+ uint32_t factor;
+ uint32_t f;
+ uint32_t as;
+ uint32_t un;
+ uint32_t se;
+ uint32_t re;
+ uint32_t mw;
+ uint32_t ma;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ dir, "orig#rply");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ scale, "scale");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ factor, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ fin, "fin");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ f, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ack_seen, "acked");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ as, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ unack, "unack_data");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ un, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ sent_end, "sent_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ se, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ reply_end, "reply_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ re, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_win, "max_win");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ mw, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_ack, "max_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ma, RTE_UINT32);
+
+static void cmd_set_conntrack_dir_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_dir_result *res = parsed_result;
+ struct rte_flow_tcp_dir_param *dir = NULL;
+
+ if (strcmp(res->dir, "orig") == 0)
+ dir = &conntrack_context.original_dir;
+ else if (strcmp(res->dir, "rply") == 0)
+ dir = &conntrack_context.reply_dir;
+ else
+ return;
+ dir->scale = res->factor;
+ dir->close_initiated = res->f;
+ dir->last_ack_seen = res->as;
+ dir->data_unacked = res->un;
+ dir->sent_end = res->se;
+ dir->reply_end = res->re;
+ dir->max_ack = res->ma;
+ dir->max_win = res->mw;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_dir = {
+ .f = cmd_set_conntrack_dir_parsed,
+ .data = NULL,
+ .help_str = "set conntrack orig|rply scale <factor> fin <sent>"
+ " acked <seen> unack_data <unack> sent_end <sent>"
+ " reply_end <reply> max_win <win> max_ack <ack>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_dir_dir,
+ (void *)&cmd_set_conntrack_dir_scale,
+ (void *)&cmd_set_conntrack_dir_scale_value,
+ (void *)&cmd_set_conntrack_dir_fin,
+ (void *)&cmd_set_conntrack_dir_fin_value,
+ (void *)&cmd_set_conntrack_dir_ack,
+ (void *)&cmd_set_conntrack_dir_ack_value,
+ (void *)&cmd_set_conntrack_dir_unack_data,
+ (void *)&cmd_set_conntrack_dir_unack_data_value,
+ (void *)&cmd_set_conntrack_dir_sent_end,
+ (void *)&cmd_set_conntrack_dir_sent_end_value,
+ (void *)&cmd_set_conntrack_dir_reply_end,
+ (void *)&cmd_set_conntrack_dir_reply_end_value,
+ (void *)&cmd_set_conntrack_dir_max_win,
+ (void *)&cmd_set_conntrack_dir_max_win_value,
+ (void *)&cmd_set_conntrack_dir_max_ack,
+ (void *)&cmd_set_conntrack_dir_max_ack_value,
+ NULL,
+ },
+};
+
/* Strict link priority scheduling mode setting */
static void
cmd_strict_link_prio_parsed(
@@ -17120,6 +17473,8 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
(cmdline_parse_inst_t *)&cmd_ddp_add,
(cmdline_parse_inst_t *)&cmd_ddp_del,
(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c5381c638b..e2b09cf16d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,7 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_CONNTRACK,
/* Validate/create actions. */
ACTIONS,
@@ -431,6 +432,10 @@ enum index {
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_WIDTH,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -569,6 +574,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
struct mplsoudp_decap_conf mplsoudp_decap_conf;
+struct rte_flow_action_conntrack conntrack_context;
+
#define ACTION_SAMPLE_ACTIONS_NUM 10
#define RAW_SAMPLE_CONFS_MAX_NUM 8
/** Storage for struct rte_flow_action_sample including external data. */
@@ -968,6 +975,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_CONNTRACK,
END_SET,
ZERO,
};
@@ -1382,6 +1390,8 @@ static const enum index next_action[] = {
ACTION_SAMPLE,
ACTION_INDIRECT,
ACTION_MODIFY_FIELD,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
ZERO,
};
@@ -1650,6 +1660,13 @@ static const enum index action_modify_field_src[] = {
ZERO,
};
+static const enum index action_update_conntrack[] = {
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1740,6 +1757,10 @@ static int
parse_vc_modify_field_id(struct context *ctx, const struct token *token,
const char *str, unsigned int len, void *buf,
unsigned int size);
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size);
static int parse_destroy(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -3400,6 +3421,13 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "conntrack state",
+ .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -4498,6 +4526,34 @@ static const struct token token_list[] = {
.call = parse_vc_action_sample_index,
.comp = comp_set_sample_index,
},
+ [ACTION_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "create a conntrack object",
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_action_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE] = {
+ .name = "conntrack_update",
+ .help = "update a conntrack object",
+ .next = NEXT(action_update_conntrack),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_modify_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE_DIR] = {
+ .name = "dir",
+ .help = "update a conntrack object direction",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
+ [ACTION_CONNTRACK_UPDATE_CTX] = {
+ .name = "ctx",
+ .help = "update a conntrack object context",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
@@ -6304,6 +6360,42 @@ parse_vc_modify_field_id(struct context *ctx, const struct token *token,
return len;
}
+/** Parse the conntrack update, not a rte_flow_action. */
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size)
+{
+ struct buffer *out = buf;
+ struct rte_flow_modify_conntrack *ct_modify = NULL;
+
+ (void)size;
+ if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
+ ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
+ return -1;
+ /* Token name must match. */
+ if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+ return -1;
+ /* Nothing else to do if there is no buffer. */
+ if (!out)
+ return len;
+ ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
+ if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
+ ct_modify->new_ct.is_original_dir =
+ conntrack_context.is_original_dir;
+ ct_modify->direction = 1;
+ } else {
+ uint32_t old_dir;
+
+ old_dir = ct_modify->new_ct.is_original_dir;
+ memcpy(&ct_modify->new_ct, &conntrack_context,
+ sizeof(conntrack_context));
+ ct_modify->new_ct.is_original_dir = old_dir;
+ ct_modify->state = 1;
+ }
+ return len;
+}
+
/** Parse tokens for destroy command. */
static int
parse_destroy(struct context *ctx, const struct token *token,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 868ff3469b..787d45afbd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1483,6 +1483,11 @@ port_action_handle_create(portid_t port_id, uint32_t id,
pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
age->context = &pia->age_type;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
+ struct rte_flow_action_conntrack *ct =
+ (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
+
+ memcpy(ct, &conntrack_context, sizeof(*ct));
}
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x22, sizeof(error));
@@ -1564,11 +1569,24 @@ port_action_handle_update(portid_t port_id, uint32_t id,
{
struct rte_flow_error error;
struct rte_flow_action_handle *action_handle;
+ struct port_indirect_action *pia;
+ const void *update;
action_handle = port_action_handle_get_by_id(port_id, id);
if (!action_handle)
return -EINVAL;
- if (rte_flow_action_handle_update(port_id, action_handle, action,
+ pia = action_get_by_id(port_id, id);
+ if (!pia)
+ return -EINVAL;
+ switch (pia->type) {
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ update = action->conf;
+ break;
+ default:
+ update = action;
+ break;
+ }
+ if (rte_flow_action_handle_update(port_id, action_handle, update,
&error)) {
return port_flow_complain(&error);
}
@@ -1621,6 +1639,51 @@ port_action_handle_query(portid_t port_id, uint32_t id)
}
data = NULL;
break;
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ if (!ret) {
+ struct rte_flow_action_conntrack *ct = data;
+
+ printf("Conntrack Context:\n"
+ " Peer: %u, Flow dir: %s, Enable: %u\n"
+ " Live: %u, SACK: %u, CACK: %u\n"
+ " Packet dir: %s, Liberal: %u, State: %u\n"
+ " Factor: %u, Retrans: %u, TCP flags: %u\n"
+ " Last Seq: %u, Last ACK: %u\n"
+ " Last Win: %u, Last End: %u\n",
+ ct->peer_port,
+ ct->is_original_dir ? "Original" : "Reply",
+ ct->enable, ct->live_connection,
+ ct->selective_ack, ct->challenge_ack_passed,
+ ct->last_direction ? "Original" : "Reply",
+ ct->liberal_mode, ct->state,
+ ct->max_ack_window, ct->retransmission_limit,
+ ct->last_index, ct->last_seq, ct->last_ack,
+ ct->last_window, ct->last_end);
+ printf(" Original Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->original_dir.scale,
+ ct->original_dir.close_initiated,
+ ct->original_dir.last_ack_seen,
+ ct->original_dir.data_unacked,
+ ct->original_dir.sent_end,
+ ct->original_dir.reply_end,
+ ct->original_dir.max_win,
+ ct->original_dir.max_ack);
+ printf(" Reply Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->reply_dir.scale,
+ ct->reply_dir.close_initiated,
+ ct->reply_dir.last_ack_seen,
+ ct->reply_dir.data_unacked,
+ ct->reply_dir.sent_end, ct->reply_dir.reply_end,
+ ct->reply_dir.max_win, ct->reply_dir.max_ack);
+ }
+ data = NULL;
+ break;
default:
printf("Indirect action %u (type: %d) on port %u doesn't"
" support query\n", id, pia->type, port_id);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c314b30f2e..9530ec5fe0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf mplsoudp_decap_conf;
extern enum rte_eth_rx_mq_mode rx_mq_mode;
+extern struct rte_flow_action_conntrack conntrack_context;
+
static inline unsigned int
lcore_num(void)
{
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v4 3/3] doc: update for conntrack
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 1/3] " Bing Zhao
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add CLI for conntrack Bing Zhao
@ 2021-04-19 17:16 ` Bing Zhao
2021-04-19 17:32 ` Thomas Monjalon
2021-04-19 17:37 ` Ori Kam
2 siblings, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:16 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
The updated documentations include:
1. Release notes
2. rte_flow.rst
3. testpmd user guide
Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 118 ++++++++++++++++++++
doc/guides/rel_notes/release_21_05.rst | 4 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++++++
3 files changed, 157 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 4b54588995..caabc49143 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,14 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^
+
+Matches a conntrack state after a conntrack action.
+
+- ``flags``: conntrack packet state flags.
+- Default ``mask`` matches all state bits.
+
Actions
~~~~~~~
@@ -2842,6 +2850,116 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
| ``value`` | immediate value or a pointer to this value |
+---------------+----------------------------------------------------------+
+Action: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^^^
+
+Create a conntrack (connection tracking) context with the provided information.
+
+In a stateful session, such as a TCP connection, the conntrack action provides
+the ability to examine every packet of the connection and associate a state
+with it. This helps to realize a stateful offload of connections with little
+software participation. For example, packets with an invalid state may be
+handled by the software while the control packets could be handled in the
+hardware. The software only needs to query the state of a connection when
+needed, and then decide how to handle the flow rules and the conntrack context.
+
+A conntrack context should be created via ``rte_flow_action_handle_create()``
+before use. The handle with the ``INDIRECT`` action type is then used for flow
+rule creation. If a flow rule for the opposite direction needs to be created,
+``rte_flow_action_handle_update()`` should be used to modify the direction.
+
+Not all fields of ``struct rte_flow_action_conntrack`` will be used for
+conntrack context creation; which ones are used depends on the HW. The fields
+should be in host byte order, and the PMD should convert them into network
+byte order when needed by the HW.
+
+``struct rte_flow_modify_conntrack`` should be used for updates.
+
+The current conntrack context information can be queried via the
+``rte_flow_action_handle_query()`` interface.
+
+.. _table_rte_flow_action_conntrack:
+
+.. table:: CONNTRACK
+
+ +--------------------------+-------------------------------------------------------------+
+ | Field | Value |
+ +==========================+=============================================================+
+ | ``peer_port`` | peer port number |
+ +--------------------------+-------------------------------------------------------------+
+ | ``is_original_dir`` | direction of this connection for creating flow rule |
+ +--------------------------+-------------------------------------------------------------+
+ | ``enable`` | enable the conntrack context |
+ +--------------------------+-------------------------------------------------------------+
+ | ``live_connection`` | one ack was seen for this connection |
+ +--------------------------+-------------------------------------------------------------+
+ | ``selective_ack`` | SACK enabled |
+ +--------------------------+-------------------------------------------------------------+
+ | ``challenge_ack_passed`` | a challenge ack has passed |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_direction`` | direction of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``liberal_mode`` | only report state change |
+ +--------------------------+-------------------------------------------------------------+
+ | ``state`` | current state |
+ +--------------------------+-------------------------------------------------------------+
+ | ``max_ack_window`` | maximal window scaling factor |
+ +--------------------------+-------------------------------------------------------------+
+ | ``retransmission_limit`` | maximal retransmission times |
+ +--------------------------+-------------------------------------------------------------+
+ | ``original_dir`` | TCP parameters of the original direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``reply_dir`` | TCP parameters of the reply direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_window`` | window value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_seq`` | sequence value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_ack`` | acknowledgment value of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_end`` | sum of ack number and length of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+
+.. _table_rte_flow_tcp_dir_param:
+
+.. table:: configuration parameters for each direction
+
+ +---------------------+---------------------------------------------------------+
+ | Field | Value |
+ +=====================+=========================================================+
+ | ``scale`` | TCP window scaling factor |
+ +---------------------+---------------------------------------------------------+
+ | ``close_initiated`` | FIN sent from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``last_ack_seen`` | an ACK packet received |
+ +---------------------+---------------------------------------------------------+
+ | ``data_unacked`` | unacknowledged data for packets from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``sent_end`` | max{seq + len} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``reply_end`` | max{sack + max{win, 1}} seen in reply packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_win`` | max{max{win, 1}} + {sack - ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_ack`` | max{ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+
+.. _table_rte_flow_modify_conntrack:
+
+.. table:: update a conntrack context
+
+ +----------------+-------------------------------------------------+
+ | Field | Value |
+ +================+=================================================+
+ | ``new_ct`` | new conntrack information |
+ +----------------+-------------------------------------------------+
+ | ``direction`` | direction will be updated |
+ +----------------+-------------------------------------------------+
+ | ``state`` | other fields except direction will be updated |
+ +----------------+-------------------------------------------------+
+ | ``reserved`` | reserved bits |
+ +----------------+-------------------------------------------------+
+
Negative types
~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8913dd4f9c..fb978aebe3 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -203,6 +203,10 @@ New Features
the events across multiple stages.
* This also reduced the scheduling overhead on a event device.
+* **Added conntrack support for rte_flow.**
+
+ * Added conntrack action and item for stateful offloading.
+
* **Updated testpmd.**
* Added a command line option to configure forced speed for Ethernet port.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 715e209fd2..efa32bb6ad 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3789,6 +3789,8 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``conntrack``: match conntrack state.
+
Actions list
^^^^^^^^^^^^
@@ -4927,6 +4929,39 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample conntrack rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Conntrack rules can be set by the following commands.
+
+The connection context needs to be constructed first with the provided
+information. In the first table, create a flow rule with the conntrack action
+and jump to the next table. In the next table, create a rule to check the state.
+
+::
+
+ testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
+ last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
+ last_seq 2632987379 last_ack 2532480967 last_end 2632987379
+ last_index 0x8
+ testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2632987379 reply_end 2633016339 max_win 28960
+ max_ack 2632987379
+ testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2532480967 reply_end 2532546247 max_win 65280
+ max_ack 2532480967
+ testpmd> flow indirect_action 0 create ingress action conntrack / end
+ testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
+ testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
+
+Construct the conntrack again with only "is_orig" set to 0 (other fields are
+ignored), then use the "update" interface to update the direction. Create flow
+rules as above for the peer port.
+
+::
+
+ testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
+
BPF Functions
--------------
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/3] doc: update for conntrack
2021-04-16 18:30 ` Ajit Khaparde
@ 2021-04-19 17:28 ` Bing Zhao
0 siblings, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:28 UTC (permalink / raw)
To: Ajit Khaparde
Cc: Ori Kam, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, dpdk-dev, Xiaoyun Li
Hi Ajit,
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Saturday, April 17, 2021 2:30 AM
> To: Bing Zhao <bingz@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>; dpdk-dev <dev@dpdk.org>;
> Xiaoyun Li <xiaoyun.li@intel.com>
> Subject: Re: [PATCH v3 3/3] doc: update for conntrack
>
> On Fri, Apr 16, 2021 at 10:54 AM Bing Zhao <bingz@nvidia.com> wrote:
> >
> > The updated documentations include:
> > 1. Release notes
> > 2. rte_flow.rst
> > 3. testpmd user guide
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 113
> ++++++++++++++++++++
> > doc/guides/rel_notes/release_21_05.rst | 4 +
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++++++
> > 3 files changed, 152 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index 2ecc48cfff..a1333819fc 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,14 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``CONNTRACK``
> > +^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches a conntrack state after conntrack action.
> > +
> > +- ``flags``: conntrack packet state flags.
> > +- Default ``mask`` matches all state bits.
> > +
> > Actions
> > ~~~~~~~
> >
> > @@ -2842,6 +2850,111 @@ for ``RTE_FLOW_FIELD_VALUE`` and
> ``RTE_FLOW_FIELD_POINTER`` respectively.
> > | ``value`` | immediate value or a pointer to this value
> |
> > +---------------+---------------------------------------------
> -------------+
> >
> > +Action: ``CONNTRACK``
> > +^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Create a conntrack (connection tracking) context with the
> provided information.
> > +
> > +In stateful session like TCP, the conntrack action provides the
> ability to
> > +examine every packet of this connection and associate the state
> to every
> > +packet. It will help to realize the stateful offloading with
> little software
> s/stateful offloading/stateful offload of connections
>
> > +participation. For example, only the control packets like SYN /
> FIN or packets
> > +with invalid state should be handled by the software.
> s/invalid state should be handled by the software/invalid state may
> be
> handled by the software while the rest of the control frames may be
> handled in hardware.
>
I updated this part, please review.
In general, the control packets could be handled by HW and SW could get
the state change of the packet. The SW could also handle the control
packet if there is a flow rule for the state change.
> > +
> > +A conntrack context should be created via
> ``rte_flow_action_handle_create()``
> > +before using. Then the handle with ``INDIRECT`` type is used for
> a flow rule
> > +creation. If a flow rule with an opposite direction needs to be
> created, the
> > +``rte_flow_action_handle_update()`` should be used to modify the
> direction.
> > +
> > +Not all the fields of the ``struct rte_flow_action_conntrack``
> will be used
> > +for a conntrack context creating, depending on the HW.
> s/context creating/context creation.
> s/depending on the HW./This capability will depend on the underlying
> hardware
>
> > +The ``struct rte_flow_modify_conntrack`` should be used for an
> updating.
> > +
> > +The current conntrack context information could be queried via
> the
> > +``rte_flow_action_handle_query()`` interface.
> > +
> > +.. _table_rte_flow_action_conntrack:
> > +
> > +.. table:: CONNTRACK
> > +
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | Field | Value
> |
> > +
> +==========================+========================================
> =====================+
> > + | ``peer_port`` | peer port number
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``is_original_dir`` | direction of this connection for
> flow rule creating |
> s/for flow rule creating/for creating flow rule
>
>
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``enable`` | enable the conntrack context
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``live_connection`` | one ack was seen for this
> connection |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``selective_ack`` | SACK enabled
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``challenge_ack_passed`` | a challenge ack has passed
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``last_direction`` | direction of the last passed
> packet |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``liberal_mode`` | only report state change
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``state`` | current state
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``max_ack_window`` | maximal window scaling factor
> |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``retransmission_limit`` | maximal retransmission times
> |
> s/times/limit
>
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``original_dir`` | TCP parameters of the original
> direction |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``reply_dir`` | TCP parameters of the reply
> direction |
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``last_window`` | window value of the last passed
> packet |
> s/value/size
Done
>
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``last_seq`` | sequence value of the last passed
> packet |
> s/value/number
Agree, thanks
>
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``last_ack`` | acknowledgement value the last
> passed packet |
> s/value/number
Thanks
>
> > + +--------------------------+----------------------------------
> ---------------------------+
> > + | ``last_end`` | sum acknowledgement and length
> value the last passed packet |
> sum of ack number and length of the last passed packet
> or
> sum of acknowledgement number and length of the last passed packet
>
Updated, thanks. Also update the typo
> > + +--------------------------+----------------------------------
> ---------------------------+
> > +
> > +.. _table_rte_flow_tcp_dir_param:
> > +
> > +.. table:: configuration parameters for each direction
> > +
> > + +---------------------+---------------------------------------------------------+
> > + | Field               | Value                                                   |
> > + +=====================+=========================================================+
> > + | ``scale``           | TCP window scaling factor                               |
> > + +---------------------+---------------------------------------------------------+
> > + | ``close_initiated`` | FIN sent from this direction                            |
> > + +---------------------+---------------------------------------------------------+
> > + | ``last_ack_seen``   | an ACK packet received                                  |
> > + +---------------------+---------------------------------------------------------+
> > + | ``data_unacked``    | unacknowledged data for packets from this direction     |
> > + +---------------------+---------------------------------------------------------+
> > + | ``sent_end``        | max{seq + len} seen in sent packets                     |
> > + +---------------------+---------------------------------------------------------+
> > + | ``reply_end``       | max{sack + max{win, 1}} seen in reply packets           |
> > + +---------------------+---------------------------------------------------------+
> > + | ``max_win``         | max{max{win, 1}} + {sack - ack} seen in sent packets    |
> > + +---------------------+---------------------------------------------------------+
> > + | ``max_ack``         | max{ack} + seen in sent packets                         |
> > + +---------------------+---------------------------------------------------------+
> > +
> > +.. _table_rte_flow_modify_conntrack:
> > +
> > +.. table:: update a conntrack context
> > +
> > + +----------------+---------------------------------------+
> > + | Field | Value |
> > + +================+=======================================+
> > + | ``new_ct`` | new conntrack information |
> > + +----------------+---------------------------------------+
> > + | ``direction`` | direction will be updated |
> > + +----------------+---------------------------------------+
> > + | ``state`` | other fields except will be updated |
> except what?
> direction??
Yes, missed this word, updated.
>
> > + +----------------+---------------------------------------+
> > + | ``reserved`` | reserved bits |
> > + +----------------+---------------------------------------+
> > +
> > Negative types
> > ~~~~~~~~~~~~~~
> >
> > diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> > index e6f99350af..824eb72981 100644
> > --- a/doc/guides/rel_notes/release_21_05.rst
> > +++ b/doc/guides/rel_notes/release_21_05.rst
> > @@ -183,6 +183,10 @@ New Features
> > the events across multiple stages.
> > * This also reduced the scheduling overhead on a event device.
> >
> > +* **Added conntrack support for rte_flow.**
> > +
> > + * Added conntrack action and item for stateful offloading.
> > +
> > * **Updated testpmd.**
> >
> > * Added a command line option to configure forced speed for Ethernet port.
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index 1fa6e2000e..4c029776aa 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -3791,6 +3791,8 @@ This section lists supported pattern items and their attributes, if any.
> > - ``s_field {unsigned}``: S field.
> > - ``seid {unsigned}``: session endpoint identifier.
> >
> > +- ``conntrack``: match conntrack state.
> > +
> > Actions list
> > ^^^^^^^^^^^^
> >
> > @@ -4925,6 +4927,39 @@ NVGRE encapsulation header and sent to port id 0.
> >      testpmd> flow create 0 ingress transfer pattern eth / end actions
> >         sample ratio 1 index 0 / port_id id 2 / end
> >
> > +Sample conntrack rules
> > +~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Conntrack rules can be set by the following commands
> > +
> > +Need to construct the connection context with provided information.
> > +In the first table, create a flow rule by using conntrack action and jump to
> > +the next table. In the next table, create a rule to check the state.
> > +
> > +::
> > +
> > + testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
> > +          last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
> > +          last_seq 2632987379 last_ack 2532480967 last_end 2632987379
> > +          last_index 0x8
> > + testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
> > +          sent_end 2632987379 reply_end 2633016339 max_win 28960
> > +          max_ack 2632987379
> > + testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
> > +          sent_end 2532480967 reply_end 2532546247 max_win 65280
> > +          max_ack 2532480967
> > + testpmd> flow indirect_action 0 create ingress action conntrack / end
> > + testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
> > + testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
> > +
> > +Construct the conntrack again with only "is_orig" set to 0 (other fields are
> > +ignored), then use "update" interface to update the direction. Create flow
> s/use/use the
>
> > +rules like above for the peer port.
> By peer, do you mean peer system? Or remote/dst port of the TCP
> connection?
The peer port of the conntrack. One conntrack context should only be used for
bi-directional traffic to/from the same ethdev port or between a pair of ethdev ports.
>
> > +
> > +::
> > +
> > + testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
> > +
> > BPF Functions
> > --------------
> >
> > --
> > 2.19.0.windows.1
> >
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] doc: update for conntrack
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 3/3] doc: update " Bing Zhao
@ 2021-04-19 17:32 ` Thomas Monjalon
2021-04-19 17:37 ` Ori Kam
1 sibling, 0 replies; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-19 17:32 UTC (permalink / raw)
To: Bing Zhao
Cc: orika, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
19/04/2021 19:16, Bing Zhao:
> The updated documentations include:
> 1. Release notes
> 2. rte_flow.rst
> 3. testpmd user guide
We need a v5 with doc squashed in previous patches.
Release notes should go with ethdev patch.
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -203,6 +203,10 @@ New Features
> the events across multiple stages.
> * This also reduced the scheduling overhead on a event device.
>
> +* **Added conntrack support for rte_flow.**
Suggested headline:
Added TCP connection tracking offload in flow API.
> +
> + * Added conntrack action and item for stateful offloading.
It should be moved above with other ethdev features.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: introduce conntrack flow action and item
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 1/3] " Bing Zhao
@ 2021-04-19 17:33 ` Ori Kam
0 siblings, 0 replies; 45+ messages in thread
From: Ori Kam @ 2021-04-19 17:33 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, April 19, 2021 8:17 PM
> Subject: [PATCH v4 1/3] ethdev: introduce conntrack flow action and item
>
> This commit introduces the conntrack action and item.
>
> Usually the HW offloading is stateless. For some stateful offloading
> like a TCP connection, HW module will help provide the ability of a
> full offloading w/o SW participation after the connection was
> established.
>
> The basic usage is that in the first flow rule the application should
> add the conntrack action and jump to the next flow table. In the
> following flow rule(s) of the next table, the application should use
> the conntrack item to match on the result.
>
> A TCP connection has two directions traffic. To set a conntrack
> action context correctly, the information of packets from both
> directions are required.
>
> The conntrack action should be created on one ethdev port and supply
> the peer ethdev port as a parameter to the action. After context
> created, it could only be used between these two ethdev ports
> (dual-port mode) or a single port. The application should modify the
> action via the API "rte_action_handle_update" only when before using
> it to create a flow rule with conntrack for the opposite direction.
> This will help the driver to recognize the direction of the flow to
> be created, especially in the single-port mode, in which case the
> traffic from both directions will go through the same ethdev port
> if the application works as an "forwarding engine" but not an end
> point. There is no need to call the update interface if the
> subsequent flow rules have nothing to be changed.
>
> Query will be supported via "rte_action_handle_query" interface,
> about the current packets information and connection status. The
> fields query capabilities depends on the HW.
>
> For the packets received during the conntrack setup, it is suggested
> to re-inject the packets in order to make sure the conntrack module
> works correctly without missing any packet. Only the valid packets
> should pass the conntrack, packets with invalid TCP information,
> like out of window, or with invalid header, like malformed, should
> not pass.
>
> Naming and definition:
> https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/
> netfilter/nf_conntrack_tcp.h
> https://elixir.bootlin.com/linux/latest/source/net/netfilter/
> nf_conntrack_proto_tcp.c
>
> Other reference:
> https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] app/testpmd: add CLI for conntrack
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 2/3] app/testpmd: add CLI for conntrack Bing Zhao
@ 2021-04-19 17:35 ` Ori Kam
0 siblings, 0 replies; 45+ messages in thread
From: Ori Kam @ 2021-04-19 17:35 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
Hi Bing,
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, April 19, 2021 8:17 PM
> Subject: [PATCH v4 2/3] app/testpmd: add CLI for conntrack
>
> The command line for testing connection tracking is added. To create
> a conntrack object, 3 parts are needed.
> set conntrack com peer ...
> set conntrack orig scale ...
> set conntrack rply scale ...
> This will create a full conntrack action structure for the indirect
> action. After the indirect action handle of "conntrack" created, it
> could be used in the flow creation. Before updating, the same
> structure is also needed together with the update command
> "conntrack_update" to update the "dir" or "ctx".
>
> After the flow with conntrack action created, the packet should jump
> to the next flow for the result checking with conntrack item. The
> state is defined with bits and a valid combination could be
> supported.
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] doc: update for conntrack
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 3/3] doc: update " Bing Zhao
2021-04-19 17:32 ` Thomas Monjalon
@ 2021-04-19 17:37 ` Ori Kam
1 sibling, 0 replies; 45+ messages in thread
From: Ori Kam @ 2021-04-19 17:37 UTC (permalink / raw)
To: Bing Zhao, NBU-Contact-Thomas Monjalon, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
Hi Bing
I think that this patch should be merged to the two previous patches.
Except this,
Acked-by: Ori Kam <orika@nvidia.com>
Thanks,
Ori
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, April 19, 2021 8:17 PM
> Subject: [PATCH v4 3/3] doc: update for conntrack
>
> The updated documentations include:
> 1. Release notes
> 2. rte_flow.rst
> 3. testpmd user guide
>
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 118 ++++++++++++++++++++
> doc/guides/rel_notes/release_21_05.rst | 4 +
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++++++
> 3 files changed, 157 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 4b54588995..caabc49143 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,14 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``CONNTRACK``
> +^^^^^^^^^^^^^^^^^^^
> +
> +Matches a conntrack state after conntrack action.
> +
> +- ``flags``: conntrack packet state flags.
> +- Default ``mask`` matches all state bits.
> +
> Actions
> ~~~~~~~
>
> @@ -2842,6 +2850,116 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
> | ``value`` | immediate value or a pointer to this value |
> +---------------+----------------------------------------------------------+
>
> +Action: ``CONNTRACK``
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +Create a conntrack (connection tracking) context with the provided information.
> +
> +In stateful session like TCP, the conntrack action provides the ability to
> +examine every packet of this connection and associate the state to every
> +packet. It will help to realize the stateful offload of connections with little
> +software participation. For example, the packets with invalid state may be
> +handled by the software. The control packets could be handled in the hardware.
> +The software just need to query the state of a connection when needed, and then
> +decide how to handle the flow rules and conntrack context.
> +
> +A conntrack context should be created via ``rte_flow_action_handle_create()``
> +before using. Then the handle with ``INDIRECT`` type is used for a flow rule
> +creation. If a flow rule with an opposite direction needs to be created, the
> +``rte_flow_action_handle_update()`` should be used to modify the direction.
> +
> +Not all the fields of the ``struct rte_flow_action_conntrack`` will be used
> +for a conntrack context creating, depending on the HW, and they should be
> +in host byte order. PMD should convert them into network byte order when
> +needed by the HW.
> +
> +The ``struct rte_flow_modify_conntrack`` should be used for an updating.
> +
> +The current conntrack context information could be queried via the
> +``rte_flow_action_handle_query()`` interface.
> +
> +.. _table_rte_flow_action_conntrack:
> +
> +.. table:: CONNTRACK
> +
> + +--------------------------+-------------------------------------------------------------+
> + | Field                    | Value                                                       |
> + +==========================+=============================================================+
> + | ``peer_port``            | peer port number                                            |
> + +--------------------------+-------------------------------------------------------------+
> + | ``is_original_dir``      | direction of this connection for creating flow rule         |
> + +--------------------------+-------------------------------------------------------------+
> + | ``enable``               | enable the conntrack context                                |
> + +--------------------------+-------------------------------------------------------------+
> + | ``live_connection``      | one ack was seen for this connection                        |
> + +--------------------------+-------------------------------------------------------------+
> + | ``selective_ack``        | SACK enabled                                                |
> + +--------------------------+-------------------------------------------------------------+
> + | ``challenge_ack_passed`` | a challenge ack has passed                                  |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_direction``       | direction of the last passed packet                         |
> + +--------------------------+-------------------------------------------------------------+
> + | ``liberal_mode``         | only report state change                                    |
> + +--------------------------+-------------------------------------------------------------+
> + | ``state``                | current state                                               |
> + +--------------------------+-------------------------------------------------------------+
> + | ``max_ack_window``       | maximal window scaling factor                               |
> + +--------------------------+-------------------------------------------------------------+
> + | ``retransmission_limit`` | maximal retransmission times                                |
> + +--------------------------+-------------------------------------------------------------+
> + | ``original_dir``         | TCP parameters of the original direction                    |
> + +--------------------------+-------------------------------------------------------------+
> + | ``reply_dir``            | TCP parameters of the reply direction                       |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_window``          | window value of the last passed packet                      |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_seq``             | sequence value of the last passed packet                    |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_ack``             | acknowledgment value the last passed packet                 |
> + +--------------------------+-------------------------------------------------------------+
> + | ``last_end``             | sum of ack number and length of the last passed packet      |
> + +--------------------------+-------------------------------------------------------------+
> +
> +.. _table_rte_flow_tcp_dir_param:
> +
> +.. table:: configuration parameters for each direction
> +
> + +---------------------+---------------------------------------------------------+
> + | Field               | Value                                                   |
> + +=====================+=========================================================+
> + | ``scale``           | TCP window scaling factor                               |
> + +---------------------+---------------------------------------------------------+
> + | ``close_initiated`` | FIN sent from this direction                            |
> + +---------------------+---------------------------------------------------------+
> + | ``last_ack_seen``   | an ACK packet received                                  |
> + +---------------------+---------------------------------------------------------+
> + | ``data_unacked``    | unacknowledged data for packets from this direction     |
> + +---------------------+---------------------------------------------------------+
> + | ``sent_end``        | max{seq + len} seen in sent packets                     |
> + +---------------------+---------------------------------------------------------+
> + | ``reply_end``       | max{sack + max{win, 1}} seen in reply packets           |
> + +---------------------+---------------------------------------------------------+
> + | ``max_win``         | max{max{win, 1}} + {sack - ack} seen in sent packets    |
> + +---------------------+---------------------------------------------------------+
> + | ``max_ack``         | max{ack} + seen in sent packets                         |
> + +---------------------+---------------------------------------------------------+
> +
> +.. _table_rte_flow_modify_conntrack:
> +
> +.. table:: update a conntrack context
> +
> + +----------------+-------------------------------------------------+
> + | Field          | Value                                           |
> + +================+=================================================+
> + | ``new_ct``     | new conntrack information                       |
> + +----------------+-------------------------------------------------+
> + | ``direction``  | direction will be updated                       |
> + +----------------+-------------------------------------------------+
> + | ``state``      | other fields except direction will be updated   |
> + +----------------+-------------------------------------------------+
> + | ``reserved``   | reserved bits                                   |
> + +----------------+-------------------------------------------------+
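The split between the ``direction`` and ``state`` selectors above can be modeled in plain C. This is only an illustrative sketch: the struct loosely mirrors ``struct rte_flow_modify_conntrack`` with a reduced set of fields, and the assumption that "state" covers every field except the direction bit; in reality the PMD applies the update to the hardware context.

```c
#include <stdint.h>

/* Reduced model of a conntrack context: one direction bit plus a
 * couple of representative "other" fields (names are illustrative). */
struct ct_conf {
    uint16_t peer_port;
    uint8_t  is_original_dir; /* the direction bit */
    uint8_t  enable;          /* part of the "other fields" */
    uint32_t last_seq;        /* part of the "other fields" */
};

/* Mirrors the update table: new_ct carries the new values,
 * direction/state select which parts of it are applied. */
struct ct_modify {
    struct ct_conf new_ct;
    uint8_t direction; /* update the direction bit from new_ct */
    uint8_t state;     /* update all other fields from new_ct */
};

static void ct_apply_update(struct ct_conf *cur, const struct ct_modify *m)
{
    if (m->direction)
        cur->is_original_dir = m->new_ct.is_original_dir;
    if (m->state) {
        uint8_t dir = cur->is_original_dir;

        *cur = m->new_ct;
        if (!m->direction)
            cur->is_original_dir = dir; /* keep direction untouched */
    }
}
```

With only ``direction`` set, just the direction bit flips; with only ``state`` set, everything else is refreshed while the direction is preserved.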
> +
> Negative types
> ~~~~~~~~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index 8913dd4f9c..fb978aebe3 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -203,6 +203,10 @@ New Features
> the events across multiple stages.
> * This also reduced the scheduling overhead on a event device.
>
> +* **Added conntrack support for rte_flow.**
> +
> + * Added conntrack action and item for stateful offloading.
> +
> * **Updated testpmd.**
>
> * Added a command line option to configure forced speed for Ethernet port.
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 715e209fd2..efa32bb6ad 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -3789,6 +3789,8 @@ This section lists supported pattern items and their attributes, if any.
> - ``s_field {unsigned}``: S field.
> - ``seid {unsigned}``: session endpoint identifier.
>
> +- ``conntrack``: match conntrack state.
> +
> Actions list
> ^^^^^^^^^^^^
>
> @@ -4927,6 +4929,39 @@ NVGRE encapsulation header and sent to port id 0.
> testpmd> flow create 0 ingress transfer pattern eth / end actions
> sample ratio 1 index 0 / port_id id 2 / end
>
> +Sample conntrack rules
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +Conntrack rules can be set by the following commands
> +
> +Need to construct the connection context with provided information.
> +In the first table, create a flow rule by using conntrack action and jump to
> +the next table. In the next table, create a rule to check the state.
> +
> +::
> +
> + testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
> + last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
> + last_seq 2632987379 last_ack 2532480967 last_end 2632987379
> + last_index 0x8
> + testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
> + sent_end 2632987379 reply_end 2633016339 max_win 28960
> + max_ack 2632987379
> + testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
> + sent_end 2532480967 reply_end 2532546247 max_win 65280
> + max_ack 2532480967
> + testpmd> flow indirect_action 0 create ingress action conntrack / end
> + testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
> + testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
> +
> +Construct the conntrack again with only "is_orig" set to 0 (other fields are
> +ignored), then use "update" interface to update the direction. Create flow
> +rules like above for the peer port.
> +
> +::
> +
> + testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
> +
> BPF Functions
> --------------
>
> --
> 2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v5 0/2] ethdev: introduce conntrack flow action and item
2021-04-10 13:46 ` [dpdk-dev] [PATCH] " Bing Zhao
` (3 preceding siblings ...)
2021-04-19 17:16 ` [dpdk-dev] [PATCH v4 0/3] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-04-19 17:51 ` Bing Zhao
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 1/2] " Bing Zhao
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add CLI for conntrack Bing Zhao
4 siblings, 2 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:51 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
This patch set includes the conntrack action and item definitions as
well as the testpmd CLI proposal.
Documents of release notes and guides are also updated.
---
v2: add testpmd CLI proposal
v3: add doc update
v4: fix building and address comments for doc and header file
v5: squash doc update into ethdev and testpmd separately
---
Bing Zhao (2):
ethdev: introduce conntrack flow action and item
app/testpmd: add CLI for conntrack
app/test-pmd/cmdline.c | 355 ++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 +++++
app/test-pmd/config.c | 65 +++-
app/test-pmd/testpmd.h | 2 +
doc/guides/prog_guide/rte_flow.rst | 118 +++++++
doc/guides/rel_notes/release_21_05.rst | 7 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 212 ++++++++++++
9 files changed, 887 insertions(+), 1 deletion(-)
--
2.19.0.windows.1
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v5 1/2] ethdev: introduce conntrack flow action and item
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 0/2] ethdev: introduce conntrack flow action and item Bing Zhao
@ 2021-04-19 17:51 ` Bing Zhao
2021-04-19 18:07 ` Thomas Monjalon
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add CLI for conntrack Bing Zhao
1 sibling, 1 reply; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:51 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
This commit introduces the conntrack action and item.
Usually the HW offloading is stateless. For some stateful offloading
like a TCP connection, HW module will help provide the ability of a
full offloading w/o SW participation after the connection was
established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
A TCP connection has two directions traffic. To set a conntrack
action context correctly, the information of packets from both
directions are required.
The conntrack action should be created on one ethdev port and supply
the peer ethdev port as a parameter to the action. After context
created, it could only be used between these two ethdev ports
(dual-port mode) or a single port. The application should modify the
action via the API "rte_action_handle_update" only when before using
it to create a flow rule with conntrack for the opposite direction.
This will help the driver to recognize the direction of the flow to
be created, especially in the single-port mode, in which case the
traffic from both directions will go through the same ethdev port
if the application works as an "forwarding engine" but not an end
point. There is no need to call the update interface if the
subsequent flow rules have nothing to be changed.
Query will be supported via "rte_action_handle_query" interface,
about the current packets information and connection status. The
fields query capabilities depends on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject the packets in order to make sure the conntrack module
works correctly without missing any packet. Only the valid packets
should pass the conntrack, packets with invalid TCP information,
like out of window, or with invalid header, like malformed, should
not pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/
netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/
nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
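The "out of window" rejection mentioned above boils down to sequence-range checks of roughly the following shape. This is a deliberately simplified sketch in the spirit of the Linux tcp_in_window() logic referenced above: sequence-number wraparound and window scaling are ignored, and the field names only mirror the per-direction parameters of this patch.

```c
#include <stdint.h>

/* Minimal per-direction tracking state, names mirroring the
 * rte_flow_tcp_dir_param fields (illustrative model only). */
struct ct_window {
    uint32_t sent_end;  /* max{seq + len} this side has sent */
    uint32_t reply_end; /* highest byte the peer has allowed */
    uint32_t max_win;   /* largest window the peer advertised */
};

/* A segment is acceptable only if it neither runs past what the
 * receiver allowed nor falls entirely before the usable window.
 * (Real trackers use modular 32-bit comparisons here.) */
static int ct_seq_in_window(const struct ct_window *w,
                            uint32_t seq, uint32_t len)
{
    uint32_t seg_end = seq + len;

    if (seg_end > w->reply_end)             /* beyond the advertised window */
        return 0;
    if (seg_end < w->sent_end - w->max_win) /* too old to matter */
        return 0;
    return 1;
}
```

Packets failing such a check are the ones that should be flagged instead of passing the conntrack.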
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 118 ++++++++++++++
doc/guides/rel_notes/release_21_05.rst | 4 +
lib/librte_ethdev/rte_flow.c | 2 +
lib/librte_ethdev/rte_flow.h | 212 +++++++++++++++++++++++++
4 files changed, 336 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 4b54588995..5f6129f799 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,14 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^
+
+Matches a conntrack state after conntrack action.
+
+- ``flags``: conntrack packet state flags.
+- Default ``mask`` matches all state bits.
+
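Matching on the item follows the usual rte_flow masked-compare semantics: only the state bits set in the mask are compared against the spec. A small self-contained model (the flag names and bit positions below are illustrative assumptions, not the actual values from ``rte_flow.h``):

```c
#include <stdint.h>

/* Illustrative bit positions only -- consult rte_flow.h for the
 * real conntrack packet-state flag definitions. */
#define CT_PKT_STATE_VALID   (UINT32_C(1) << 0) /* passed the state check */
#define CT_PKT_STATE_CHANGED (UINT32_C(1) << 1) /* connection state changed */
#define CT_PKT_STATE_INVALID (UINT32_C(1) << 2) /* failed the state check */

/* Masked match: compare only the bits selected by the mask. */
static int ct_item_match(uint32_t pkt_state, uint32_t spec, uint32_t mask)
{
    return (pkt_state & mask) == (spec & mask);
}
```

A rule that should only see valid, non-invalid packets would set spec to the VALID bit with a mask covering both bits.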
Actions
~~~~~~~
@@ -2842,6 +2850,116 @@ for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
| ``value`` | immediate value or a pointer to this value |
+---------------+----------------------------------------------------------+
+Action: ``CONNTRACK``
+^^^^^^^^^^^^^^^^^^^^^
+
+Create a conntrack (connection tracking) context with the provided information.
+
+In stateful session like TCP, the conntrack action provides the ability to
+examine every packet of this connection and associate the state to every
+packet. It will help to realize the stateful offload of connections with little
+software participation. For example, the packets with invalid state may be
+handled by the software. The control packets could be handled in the hardware.
+The software just need to query the state of a connection when needed, and then
+decide how to handle the flow rules and conntrack context.
+
+A conntrack context should be created via ``rte_flow_action_handle_create()``
+before using. Then the handle with ``INDIRECT`` type is used for a flow rule
+creation. If a flow rule with an opposite direction needs to be created, the
+``rte_flow_action_handle_update()`` should be used to modify the direction.
+
+Not all the fields of the ``struct rte_flow_action_conntrack`` will be used
+for a conntrack context creating, depending on the HW, and they should be
+in host byte order. PMD should convert them into network byte order when
+needed by the HW.
+
+The ``struct rte_flow_modify_conntrack`` should be used for an updating.
+
+The current conntrack context information could be queried via the
+``rte_flow_action_handle_query()`` interface.
+
+.. _table_rte_flow_action_conntrack:
+
+.. table:: CONNTRACK
+
+ +--------------------------+-------------------------------------------------------------+
+ | Field | Value |
+ +==========================+=============================================================+
+ | ``peer_port`` | peer port number |
+ +--------------------------+-------------------------------------------------------------+
+ | ``is_original_dir`` | direction of this connection for creating flow rule |
+ +--------------------------+-------------------------------------------------------------+
+ | ``enable`` | enable the conntrack context |
+ +--------------------------+-------------------------------------------------------------+
+ | ``live_connection`` | one ack was seen for this connection |
+ +--------------------------+-------------------------------------------------------------+
+ | ``selective_ack`` | SACK enabled |
+ +--------------------------+-------------------------------------------------------------+
+ | ``challenge_ack_passed`` | a challenge ack has passed |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_direction`` | direction of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``liberal_mode`` | only report state change |
+ +--------------------------+-------------------------------------------------------------+
+ | ``state`` | current state |
+ +--------------------------+-------------------------------------------------------------+
+ | ``max_ack_window`` | maximal window scaling factor |
+ +--------------------------+-------------------------------------------------------------+
+ | ``retransmission_limit`` | maximal retransmission times |
+ +--------------------------+-------------------------------------------------------------+
+ | ``original_dir`` | TCP parameters of the original direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``reply_dir`` | TCP parameters of the reply direction |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_window`` | window size of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_seq`` | sequence number of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_ack`` | acknowledgment number the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+ | ``last_end`` | sum of ack number and length of the last passed packet |
+ +--------------------------+-------------------------------------------------------------+
+
+.. _table_rte_flow_tcp_dir_param:
+
+.. table:: configuration parameters for each direction
+
+ +---------------------+---------------------------------------------------------+
+ | Field | Value |
+ +=====================+=========================================================+
+ | ``scale`` | TCP window scaling factor |
+ +---------------------+---------------------------------------------------------+
+ | ``close_initiated`` | FIN sent from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``last_ack_seen`` | an ACK packet received |
+ +---------------------+---------------------------------------------------------+
+ | ``data_unacked`` | unacknowledged data for packets from this direction |
+ +---------------------+---------------------------------------------------------+
+ | ``sent_end`` | max{seq + len} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``reply_end`` | max{ack + max{win, 1}} seen in reply packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_win`` | max{max{win, 1}} + {sack - ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+ | ``max_ack`` | max{ack} seen in sent packets |
+ +---------------------+---------------------------------------------------------+
+
+.. _table_rte_flow_modify_conntrack:
+
+.. table:: update a conntrack context
+
+ +----------------+-------------------------------------------------+
+ | Field | Value |
+ +================+=================================================+
+ | ``new_ct`` | new conntrack information |
+ +----------------+-------------------------------------------------+
+ | ``direction`` | direction will be updated |
+ +----------------+-------------------------------------------------+
+ | ``state`` | other fields except direction will be updated |
+ +----------------+-------------------------------------------------+
+ | ``reserved`` | reserved bits |
+ +----------------+-------------------------------------------------+
+
Negative types
~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 8913dd4f9c..a5e2a8e503 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -87,6 +87,10 @@ New Features
to support metering traffic by packet per second (PPS),
in addition to the initial bytes per second (BPS) mode (value 0).
+* **Added TCP connection tracking offload in flow API.**
+
+ * Added conntrack item and action for stateful connection offload.
+
* **Updated Arkville PMD driver.**
Updated Arkville net driver with new features and improvements, including:
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 0d2610b7c4..c7c7108933 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+ MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
};
/** Generate flow_action[] entry. */
@@ -186,6 +187,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
* indirect action handle.
*/
MK_FLOW_ACTION(INDIRECT, 0),
+ MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
};
int
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 0447d36002..dae16b3433 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -30,6 +30,7 @@
#include <rte_esp.h>
#include <rte_higig.h>
#include <rte_ecpri.h>
+#include <rte_bitops.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
@@ -551,6 +552,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches conntrack state.
+ *
+ * @see struct rte_flow_item_conntrack.
+ */
+ RTE_FLOW_ITEM_TYPE_CONNTRACK,
};
/**
@@ -1685,6 +1695,51 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+/**
+ * The packet is valid after conntrack checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_VALID RTE_BIT32(0)
+/**
+ * The state of the connection is changed.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED RTE_BIT32(1)
+/**
+ * Error is detected on this packet for this connection and
+ * an invalid state is set.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_INVALID RTE_BIT32(2)
+/**
+ * The HW connection tracking module is disabled.
+ * It can be due to application command or an invalid state.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED RTE_BIT32(3)
+/**
+ * The packet contains some bad field(s) and cannot continue
+ * with the conntrack module checking.
+ */
+#define RTE_FLOW_CONNTRACK_PKT_STATE_BAD RTE_BIT32(4)
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_CONNTRACK
+ *
+ * Matches the state of a packet after it passed the connection tracking
+ * examination. The state is a bitmap of one RTE_FLOW_CONNTRACK_PKT_STATE*
+ * or a reasonable combination of these bits.
+ */
+struct rte_flow_item_conntrack {
+ uint32_t flags;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_CONNTRACK. */
+#ifndef __cplusplus
+static const struct rte_flow_item_conntrack rte_flow_item_conntrack_mask = {
+ .flags = 0xffffffff,
+};
+#endif
+
/**
* Matching pattern item definition.
*
@@ -2278,6 +2333,15 @@ enum rte_flow_action_type {
* or different ethdev ports.
*/
RTE_FLOW_ACTION_TYPE_INDIRECT,
+
+ /**
+ * [META]
+ *
+ * Enable tracking a TCP connection state.
+ *
+ * @see struct rte_flow_action_conntrack.
+ */
+ RTE_FLOW_ACTION_TYPE_CONNTRACK,
};
/**
@@ -2876,6 +2940,154 @@ struct rte_flow_action_set_dscp {
*/
struct rte_flow_action_handle;
+/**
+ * The state of a TCP connection.
+ */
+enum rte_flow_conntrack_state {
+ /** SYN-ACK packet was seen. */
+ RTE_FLOW_CONNTRACK_STATE_SYN_RECV,
+ /** 3-way handshake was done. */
+ RTE_FLOW_CONNTRACK_STATE_ESTABLISHED,
+ /** First FIN packet was received to close the connection. */
+ RTE_FLOW_CONNTRACK_STATE_FIN_WAIT,
+ /** First FIN was ACKed. */
+ RTE_FLOW_CONNTRACK_STATE_CLOSE_WAIT,
+ /** Second FIN was received, waiting for the last ACK. */
+ RTE_FLOW_CONNTRACK_STATE_LAST_ACK,
+ /** Second FIN was ACKed, connection was closed. */
+ RTE_FLOW_CONNTRACK_STATE_TIME_WAIT,
+};
+
+/**
+ * The last passed TCP packet flags of a connection.
+ */
+enum rte_flow_conntrack_tcp_last_index {
+ RTE_FLOW_CONNTRACK_FLAG_NONE = 0, /**< No Flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYN = RTE_BIT32(0), /**< With SYN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_SYNACK = RTE_BIT32(1), /**< With SYNACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_FIN = RTE_BIT32(2), /**< With FIN flag. */
+ RTE_FLOW_CONNTRACK_FLAG_ACK = RTE_BIT32(3), /**< With ACK flag. */
+ RTE_FLOW_CONNTRACK_FLAG_RST = RTE_BIT32(4), /**< With RST flag. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Configuration parameters for each direction of a TCP connection.
+ * All fields should be in host byte order.
+ * If needed, driver should convert all fields to network byte order
+ * if HW needs them in that way.
+ */
+struct rte_flow_tcp_dir_param {
+ /** TCP window scaling factor, 0xF to disable. */
+ uint32_t scale:4;
+ /** The FIN was sent by this direction. */
+ uint32_t close_initiated:1;
+ /** An ACK packet has been received by this side. */
+ uint32_t last_ack_seen:1;
+ /**
+ * If set, it indicates that there is unacknowledged data for the
+ * packets sent from this direction.
+ */
+ uint32_t data_unacked:1;
+ /**
+ * Maximal value of sequence + payload length in sent
+ * packets (next ACK from the opposite direction).
+ */
+ uint32_t sent_end;
+ /**
+ * Maximal value of (ACK + window size) seen in received packets,
+ * i.e. the maximal sequence number that could be sent.
+ */
+ uint32_t reply_end;
+ /** Maximal value of actual window size in sent packets. */
+ uint32_t max_win;
+ /** Maximal value of ACK in sent packets. */
+ uint32_t max_ack;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Configuration and initial state for the connection tracking module.
+ * This structure could be used for both setting and query.
+ * All fields should be in host byte order.
+ */
+struct rte_flow_action_conntrack {
+ /** The peer port number, can be the same port. */
+ uint16_t peer_port;
+ /**
+ * Direction of this connection when creating a flow rule; the
+ * value only affects the creation of subsequent flow rules.
+ */
+ uint32_t is_original_dir:1;
+ /**
+ * Enable / disable the conntrack HW module. When disabled, the
+ * result will always be RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED.
+ * In this state the HW will act as passthrough.
+ * It only affects this conntrack object in the HW without any effect
+ * to the other objects.
+ */
+ uint32_t enable:1;
+ /** At least one ack was seen after the connection was established. */
+ uint32_t live_connection:1;
+ /** Enable selective ACK on this connection. */
+ uint32_t selective_ack:1;
+ /** A challenge ack has passed. */
+ uint32_t challenge_ack_passed:1;
+ /**
+ * 1: The last packet is seen from the original direction.
+ * 0: The last packet is seen from the reply direction.
+ */
+ uint32_t last_direction:1;
+ /** No TCP check will be done except the state change. */
+ uint32_t liberal_mode:1;
+ /** The current state of this connection. */
+ enum rte_flow_conntrack_state state;
+ /** Scaling factor for maximal allowed ACK window. */
+ uint8_t max_ack_window;
+ /** Maximal allowed number of retransmission times. */
+ uint8_t retransmission_limit;
+ /** TCP parameters of the original direction. */
+ struct rte_flow_tcp_dir_param original_dir;
+ /** TCP parameters of the reply direction. */
+ struct rte_flow_tcp_dir_param reply_dir;
+ /** The window value of the last packet passed this conntrack. */
+ uint16_t last_window;
+ enum rte_flow_conntrack_tcp_last_index last_index;
+ /** The sequence of the last packet passed this conntrack. */
+ uint32_t last_seq;
+ /** The acknowledgment of the last packet passed this conntrack. */
+ uint32_t last_ack;
+ /**
+ * The total value ACK + payload length of the last packet
+ * passed this conntrack.
+ */
+ uint32_t last_end;
+};
+
+/**
+ * RTE_FLOW_ACTION_TYPE_CONNTRACK
+ *
+ * Wrapper structure for the context update interface.
+ * If a port cannot support in-place updating, the only valid
+ * alternative is to destroy the old context and create a new one.
+ */
+struct rte_flow_modify_conntrack {
+ /** New connection tracking parameters to be updated. */
+ struct rte_flow_action_conntrack new_ct;
+ /** The direction field will be updated. */
+ uint32_t direction:1;
+ /** All the other fields except direction will be updated. */
+ uint32_t state:1;
+ /** Reserved bits for the future usage. */
+ uint32_t reserved:30;
+};
+
/**
* Field IDs for MODIFY_FIELD action.
*/
--
2.19.0.windows.1
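The per-direction bookkeeping described by ``struct rte_flow_tcp_dir_param``
(sent_end, reply_end, max_win, max_ack) can be sketched in plain C as below.
This is only an illustration of the formulas in the table above; the struct
and helper names (``ct_dir``, ``ct_dir_sent``, ``ct_dir_reply``) are
hypothetical and not part of the DPDK API, and wrap-around of TCP sequence
numbers is ignored for brevity.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mirror of rte_flow_tcp_dir_param tracking fields. */
struct ct_dir {
	uint32_t sent_end;  /* max{seq + len} seen in sent packets */
	uint32_t reply_end; /* max{ack + max{win, 1}} seen in reply packets */
	uint32_t max_win;   /* maximal window size seen in sent packets */
	uint32_t max_ack;   /* maximal ACK seen in sent packets */
};

static uint32_t u32_max(uint32_t a, uint32_t b)
{
	return a > b ? a : b;
}

/* Update this direction's state from a packet it transmits. */
static void ct_dir_sent(struct ct_dir *d, uint32_t seq, uint32_t len,
			uint32_t ack, uint32_t win)
{
	d->sent_end = u32_max(d->sent_end, seq + len);
	d->max_ack = u32_max(d->max_ack, ack);
	d->max_win = u32_max(d->max_win, u32_max(win, 1));
}

/* Update from a packet received from the peer (reply direction). */
static void ct_dir_reply(struct ct_dir *d, uint32_t ack, uint32_t win)
{
	d->reply_end = u32_max(d->reply_end, ack + u32_max(win, 1));
}
```

The ``max{win, 1}`` clamp keeps the send window open even when the peer
advertises a zero window, matching the table entries above.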
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v5 2/2] app/testpmd: add CLI for conntrack
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 0/2] ethdev: introduce conntrack flow action and item Bing Zhao
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 1/2] " Bing Zhao
@ 2021-04-19 17:51 ` Bing Zhao
1 sibling, 0 replies; 45+ messages in thread
From: Bing Zhao @ 2021-04-19 17:51 UTC (permalink / raw)
To: orika, thomas, ferruh.yigit, andrew.rybchenko
Cc: dev, ajit.khaparde, xiaoyun.li
The command line for testing connection tracking is added. To create
a conntrack object, 3 parts are needed.
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow creation. Before updating, the same structure
is also needed, together with the update command "conntrack_update",
to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet
should jump to the next flow for result checking with the conntrack
item. The state is defined with bits and a valid combination could
be supported.
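Put together, a session could be configured with the commands added by this
patch, following the token order of the parsers below. The numeric values
here are purely illustrative, and the final ``flow indirect_action`` line
assumes the indirect action handle interface from patch 1/2 of this series:

```
set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
    last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
    last_seq 10000 last_ack 20000 last_end 10000 last_index 0x8
set conntrack orig scale 7 fin 0 acked 1 unack_data 0
    sent_end 10000 reply_end 40000 max_win 510 max_ack 20000
set conntrack rply scale 7 fin 0 acked 1 unack_data 0
    sent_end 20000 reply_end 20500 max_win 510 max_ack 10000
flow indirect_action 0 create ingress action conntrack / end
```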
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
app/test-pmd/cmdline.c | 355 ++++++++++++++++++++
app/test-pmd/cmdline_flow.c | 92 +++++
app/test-pmd/config.c | 65 +++-
app/test-pmd/testpmd.h | 2 +
doc/guides/rel_notes/release_21_05.rst | 3 +
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 35 ++
6 files changed, 551 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4d9e038ce8..d282c7cad6 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13621,6 +13621,359 @@ cmdline_parse_inst_t cmd_set_mplsoudp_decap_with_vlan = {
},
};
+/** Set connection tracking object common details */
+struct cmd_set_conntrack_common_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t common;
+ cmdline_fixed_string_t peer;
+ cmdline_fixed_string_t is_orig;
+ cmdline_fixed_string_t enable;
+ cmdline_fixed_string_t live;
+ cmdline_fixed_string_t sack;
+ cmdline_fixed_string_t cack;
+ cmdline_fixed_string_t last_dir;
+ cmdline_fixed_string_t liberal;
+ cmdline_fixed_string_t state;
+ cmdline_fixed_string_t max_ack_win;
+ cmdline_fixed_string_t retrans;
+ cmdline_fixed_string_t last_win;
+ cmdline_fixed_string_t last_seq;
+ cmdline_fixed_string_t last_ack;
+ cmdline_fixed_string_t last_end;
+ cmdline_fixed_string_t last_index;
+ uint8_t stat;
+ uint8_t factor;
+ uint16_t peer_port;
+ uint32_t is_original;
+ uint32_t en;
+ uint32_t is_live;
+ uint32_t s_ack;
+ uint32_t c_ack;
+ uint32_t ld;
+ uint32_t lb;
+ uint8_t re_num;
+ uint8_t li;
+ uint16_t lw;
+ uint32_t ls;
+ uint32_t la;
+ uint32_t le;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_common_com =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ common, "com");
+cmdline_parse_token_string_t cmd_set_conntrack_common_peer =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer, "peer");
+cmdline_parse_token_num_t cmd_set_conntrack_common_peer_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ peer_port, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_is_orig =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_orig, "is_orig");
+cmdline_parse_token_num_t cmd_set_conntrack_common_is_orig_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_original, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_enable =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ enable, "enable");
+cmdline_parse_token_num_t cmd_set_conntrack_common_enable_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ en, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_live =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ live, "live");
+cmdline_parse_token_num_t cmd_set_conntrack_common_live_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ is_live, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_sack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ sack, "sack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_sack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ s_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_cack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ cack, "cack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_cack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ c_ack, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_dir, "last_dir");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_dir_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ld, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_liberal =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ liberal, "liberal");
+cmdline_parse_token_num_t cmd_set_conntrack_common_liberal_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lb, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_state =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ state, "state");
+cmdline_parse_token_num_t cmd_set_conntrack_common_state_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ stat, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_max_ackwin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ max_ack_win, "max_ack_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_max_ackwin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ factor, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_retrans =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ retrans, "r_lim");
+cmdline_parse_token_num_t cmd_set_conntrack_common_retrans_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ re_num, RTE_UINT8);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_win, "last_win");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ lw, RTE_UINT16);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_seq =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_seq, "last_seq");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_seq_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ ls, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_ack, "last_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ la, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_end, "last_end");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ le, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_common_last_index =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_common_result,
+ last_index, "last_index");
+cmdline_parse_token_num_t cmd_set_conntrack_common_last_index_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_common_result,
+ li, RTE_UINT8);
+
+static void cmd_set_conntrack_common_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_common_result *res = parsed_result;
+
+ /* No need to swap to big endian. */
+ conntrack_context.peer_port = res->peer_port;
+ conntrack_context.is_original_dir = res->is_original;
+ conntrack_context.enable = res->en;
+ conntrack_context.live_connection = res->is_live;
+ conntrack_context.selective_ack = res->s_ack;
+ conntrack_context.challenge_ack_passed = res->c_ack;
+ conntrack_context.last_direction = res->ld;
+ conntrack_context.liberal_mode = res->lb;
+ conntrack_context.state = (enum rte_flow_conntrack_state)res->stat;
+ conntrack_context.max_ack_window = res->factor;
+ conntrack_context.retransmission_limit = res->re_num;
+ conntrack_context.last_window = res->lw;
+ conntrack_context.last_index =
+ (enum rte_flow_conntrack_tcp_last_index)res->li;
+ conntrack_context.last_seq = res->ls;
+ conntrack_context.last_ack = res->la;
+ conntrack_context.last_end = res->le;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_common = {
+ .f = cmd_set_conntrack_common_parsed,
+ .data = NULL,
+ .help_str = "set conntrack com peer <port_id> is_orig <dir> enable <en>"
+ " live <ack_seen> sack <en> cack <passed> last_dir <dir>"
+ " liberal <en> state <s> max_ack_win <factor> r_lim <num>"
+ " last_win <win> last_seq <seq> last_ack <ack> last_end <end>"
+ " last_index <flag>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_common_com,
+ (void *)&cmd_set_conntrack_common_peer,
+ (void *)&cmd_set_conntrack_common_peer_value,
+ (void *)&cmd_set_conntrack_common_is_orig,
+ (void *)&cmd_set_conntrack_common_is_orig_value,
+ (void *)&cmd_set_conntrack_common_enable,
+ (void *)&cmd_set_conntrack_common_enable_value,
+ (void *)&cmd_set_conntrack_common_live,
+ (void *)&cmd_set_conntrack_common_live_value,
+ (void *)&cmd_set_conntrack_common_sack,
+ (void *)&cmd_set_conntrack_common_sack_value,
+ (void *)&cmd_set_conntrack_common_cack,
+ (void *)&cmd_set_conntrack_common_cack_value,
+ (void *)&cmd_set_conntrack_common_last_dir,
+ (void *)&cmd_set_conntrack_common_last_dir_value,
+ (void *)&cmd_set_conntrack_common_liberal,
+ (void *)&cmd_set_conntrack_common_liberal_value,
+ (void *)&cmd_set_conntrack_common_state,
+ (void *)&cmd_set_conntrack_common_state_value,
+ (void *)&cmd_set_conntrack_common_max_ackwin,
+ (void *)&cmd_set_conntrack_common_max_ackwin_value,
+ (void *)&cmd_set_conntrack_common_retrans,
+ (void *)&cmd_set_conntrack_common_retrans_value,
+ (void *)&cmd_set_conntrack_common_last_win,
+ (void *)&cmd_set_conntrack_common_last_win_value,
+ (void *)&cmd_set_conntrack_common_last_seq,
+ (void *)&cmd_set_conntrack_common_last_seq_value,
+ (void *)&cmd_set_conntrack_common_last_ack,
+ (void *)&cmd_set_conntrack_common_last_ack_value,
+ (void *)&cmd_set_conntrack_common_last_end,
+ (void *)&cmd_set_conntrack_common_last_end_value,
+ (void *)&cmd_set_conntrack_common_last_index,
+ (void *)&cmd_set_conntrack_common_last_index_value,
+ NULL,
+ },
+};
+
+/** Set connection tracking object both directions' details */
+struct cmd_set_conntrack_dir_result {
+ cmdline_fixed_string_t set;
+ cmdline_fixed_string_t conntrack;
+ cmdline_fixed_string_t dir;
+ cmdline_fixed_string_t scale;
+ cmdline_fixed_string_t fin;
+ cmdline_fixed_string_t ack_seen;
+ cmdline_fixed_string_t unack;
+ cmdline_fixed_string_t sent_end;
+ cmdline_fixed_string_t reply_end;
+ cmdline_fixed_string_t max_win;
+ cmdline_fixed_string_t max_ack;
+ uint32_t factor;
+ uint32_t f;
+ uint32_t as;
+ uint32_t un;
+ uint32_t se;
+ uint32_t re;
+ uint32_t mw;
+ uint32_t ma;
+};
+
+cmdline_parse_token_string_t cmd_set_conntrack_dir_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ set, "set");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_conntrack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ conntrack, "conntrack");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_dir =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ dir, "orig#rply");
+cmdline_parse_token_string_t cmd_set_conntrack_dir_scale =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ scale, "scale");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_scale_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ factor, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_fin =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ fin, "fin");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_fin_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ f, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ack_seen, "acked");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ as, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_unack_data =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ unack, "unack_data");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_unack_data_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ un, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_sent_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ sent_end, "sent_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_sent_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ se, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_reply_end =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ reply_end, "reply_end");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_reply_end_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ re, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_win =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_win, "max_win");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_win_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ mw, RTE_UINT32);
+cmdline_parse_token_string_t cmd_set_conntrack_dir_max_ack =
+ TOKEN_STRING_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ max_ack, "max_ack");
+cmdline_parse_token_num_t cmd_set_conntrack_dir_max_ack_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_set_conntrack_dir_result,
+ ma, RTE_UINT32);
+
+static void cmd_set_conntrack_dir_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_set_conntrack_dir_result *res = parsed_result;
+ struct rte_flow_tcp_dir_param *dir = NULL;
+
+ if (strcmp(res->dir, "orig") == 0)
+ dir = &conntrack_context.original_dir;
+ else if (strcmp(res->dir, "rply") == 0)
+ dir = &conntrack_context.reply_dir;
+ else
+ return;
+ dir->scale = res->factor;
+ dir->close_initiated = res->f;
+ dir->last_ack_seen = res->as;
+ dir->data_unacked = res->un;
+ dir->sent_end = res->se;
+ dir->reply_end = res->re;
+ dir->max_ack = res->ma;
+ dir->max_win = res->mw;
+}
+
+cmdline_parse_inst_t cmd_set_conntrack_dir = {
+ .f = cmd_set_conntrack_dir_parsed,
+ .data = NULL,
+ .help_str = "set conntrack orig|rply scale <factor> fin <sent>"
+ " acked <seen> unack_data <unack> sent_end <sent>"
+ " reply_end <reply> max_win <win> max_ack <ack>",
+ .tokens = {
+ (void *)&cmd_set_conntrack_set,
+ (void *)&cmd_set_conntrack_conntrack,
+ (void *)&cmd_set_conntrack_dir_dir,
+ (void *)&cmd_set_conntrack_dir_scale,
+ (void *)&cmd_set_conntrack_dir_scale_value,
+ (void *)&cmd_set_conntrack_dir_fin,
+ (void *)&cmd_set_conntrack_dir_fin_value,
+ (void *)&cmd_set_conntrack_dir_ack,
+ (void *)&cmd_set_conntrack_dir_ack_value,
+ (void *)&cmd_set_conntrack_dir_unack_data,
+ (void *)&cmd_set_conntrack_dir_unack_data_value,
+ (void *)&cmd_set_conntrack_dir_sent_end,
+ (void *)&cmd_set_conntrack_dir_sent_end_value,
+ (void *)&cmd_set_conntrack_dir_reply_end,
+ (void *)&cmd_set_conntrack_dir_reply_end_value,
+ (void *)&cmd_set_conntrack_dir_max_win,
+ (void *)&cmd_set_conntrack_dir_max_win_value,
+ (void *)&cmd_set_conntrack_dir_max_ack,
+ (void *)&cmd_set_conntrack_dir_max_ack_value,
+ NULL,
+ },
+};
+
/* Strict link priority scheduling mode setting */
static void
cmd_strict_link_prio_parsed(
@@ -17120,6 +17473,8 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
(cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
+ (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
(cmdline_parse_inst_t *)&cmd_ddp_add,
(cmdline_parse_inst_t *)&cmd_ddp_del,
(cmdline_parse_inst_t *)&cmd_ddp_get_list,
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c5381c638b..e2b09cf16d 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,7 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_CONNTRACK,
/* Validate/create actions. */
ACTIONS,
@@ -431,6 +432,10 @@ enum index {
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_WIDTH,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -569,6 +574,8 @@ struct mplsoudp_encap_conf mplsoudp_encap_conf;
struct mplsoudp_decap_conf mplsoudp_decap_conf;
+struct rte_flow_action_conntrack conntrack_context;
+
#define ACTION_SAMPLE_ACTIONS_NUM 10
#define RAW_SAMPLE_CONFS_MAX_NUM 8
/** Storage for struct rte_flow_action_sample including external data. */
@@ -968,6 +975,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_CONNTRACK,
END_SET,
ZERO,
};
@@ -1382,6 +1390,8 @@ static const enum index next_action[] = {
ACTION_SAMPLE,
ACTION_INDIRECT,
ACTION_MODIFY_FIELD,
+ ACTION_CONNTRACK,
+ ACTION_CONNTRACK_UPDATE,
ZERO,
};
@@ -1650,6 +1660,13 @@ static const enum index action_modify_field_src[] = {
ZERO,
};
+static const enum index action_update_conntrack[] = {
+ ACTION_CONNTRACK_UPDATE_DIR,
+ ACTION_CONNTRACK_UPDATE_CTX,
+ ACTION_NEXT,
+ ZERO,
+};
+
static int parse_set_raw_encap_decap(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1740,6 +1757,10 @@ static int
parse_vc_modify_field_id(struct context *ctx, const struct token *token,
const char *str, unsigned int len, void *buf,
unsigned int size);
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size);
static int parse_destroy(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -3400,6 +3421,13 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "conntrack state",
+ .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
@@ -4498,6 +4526,34 @@ static const struct token token_list[] = {
.call = parse_vc_action_sample_index,
.comp = comp_set_sample_index,
},
+ [ACTION_CONNTRACK] = {
+ .name = "conntrack",
+ .help = "create a conntrack object",
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_action_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE] = {
+ .name = "conntrack_update",
+ .help = "update a conntrack object",
+ .next = NEXT(action_update_conntrack),
+ .priv = PRIV_ACTION(CONNTRACK,
+ sizeof(struct rte_flow_modify_conntrack)),
+ .call = parse_vc,
+ },
+ [ACTION_CONNTRACK_UPDATE_DIR] = {
+ .name = "dir",
+ .help = "update a conntrack object direction",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
+ [ACTION_CONNTRACK_UPDATE_CTX] = {
+ .name = "ctx",
+ .help = "update a conntrack object context",
+ .next = NEXT(action_update_conntrack),
+ .call = parse_vc_action_conntrack_update,
+ },
/* Indirect action destroy arguments. */
[INDIRECT_ACTION_DESTROY_ID] = {
.name = "action_id",
@@ -6304,6 +6360,42 @@ parse_vc_modify_field_id(struct context *ctx, const struct token *token,
return len;
}
+/** Parse the conntrack update; the payload is not an rte_flow_action. */
+static int
+parse_vc_action_conntrack_update(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len, void *buf,
+ unsigned int size)
+{
+ struct buffer *out = buf;
+ struct rte_flow_modify_conntrack *ct_modify = NULL;
+
+ (void)size;
+ if (ctx->curr != ACTION_CONNTRACK_UPDATE_CTX &&
+ ctx->curr != ACTION_CONNTRACK_UPDATE_DIR)
+ return -1;
+ /* Token name must match. */
+ if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+ return -1;
+ /* Nothing else to do if there is no buffer. */
+ if (!out)
+ return len;
+ ct_modify = (struct rte_flow_modify_conntrack *)out->args.vc.data;
+ if (ctx->curr == ACTION_CONNTRACK_UPDATE_DIR) {
+ ct_modify->new_ct.is_original_dir =
+ conntrack_context.is_original_dir;
+ ct_modify->direction = 1;
+ } else {
+ uint32_t old_dir;
+
+ old_dir = ct_modify->new_ct.is_original_dir;
+ memcpy(&ct_modify->new_ct, &conntrack_context,
+ sizeof(conntrack_context));
+ ct_modify->new_ct.is_original_dir = old_dir;
+ ct_modify->state = 1;
+ }
+ return len;
+}
+
/** Parse tokens for destroy command. */
static int
parse_destroy(struct context *ctx, const struct token *token,
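The two update branches in `parse_vc_action_conntrack_update` above can be mirrored in a standalone sketch. The struct and function names below are simplified stand-ins, not the real `rte_flow` definitions: `dir` overwrites only the direction bit and sets the `direction` flag, while `ctx` replaces the whole context but preserves the direction selected earlier, then sets the `state` flag.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for rte_flow_action_conntrack and
 * rte_flow_modify_conntrack; field names follow the patch. */
struct ct_conf {
	uint32_t is_original_dir;
	uint32_t last_seq;
	uint32_t last_ack;
};

struct ct_modify {
	struct ct_conf new_ct;
	uint32_t direction; /* set when only the direction is updated */
	uint32_t state;     /* set when the rest of the context is updated */
};

/* Mirror of the "dir" branch: touch only the direction bit. */
static void
update_dir(struct ct_modify *m, const struct ct_conf *src)
{
	m->new_ct.is_original_dir = src->is_original_dir;
	m->direction = 1;
}

/* Mirror of the "ctx" branch: copy the full context but keep the
 * direction chosen earlier (possibly by update_dir). */
static void
update_ctx(struct ct_modify *m, const struct ct_conf *src)
{
	uint32_t old_dir = m->new_ct.is_original_dir;

	memcpy(&m->new_ct, src, sizeof(*src));
	m->new_ct.is_original_dir = old_dir;
	m->state = 1;
}
```

This mirrors why the testpmd parser saves `is_original_dir` before the `memcpy`: a `ctx` update must not silently flip a direction that was set by a preceding `dir` update.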
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 868ff3469b..787d45afbd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1483,6 +1483,11 @@ port_action_handle_create(portid_t port_id, uint32_t id,
pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
age->context = &pia->age_type;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) {
+ struct rte_flow_action_conntrack *ct =
+ (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf);
+
+ memcpy(ct, &conntrack_context, sizeof(*ct));
}
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x22, sizeof(error));
@@ -1564,11 +1569,24 @@ port_action_handle_update(portid_t port_id, uint32_t id,
{
struct rte_flow_error error;
struct rte_flow_action_handle *action_handle;
+ struct port_indirect_action *pia;
+ const void *update;
action_handle = port_action_handle_get_by_id(port_id, id);
if (!action_handle)
return -EINVAL;
- if (rte_flow_action_handle_update(port_id, action_handle, action,
+ pia = action_get_by_id(port_id, id);
+ if (!pia)
+ return -EINVAL;
+ switch (pia->type) {
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ update = action->conf;
+ break;
+ default:
+ update = action;
+ break;
+ }
+ if (rte_flow_action_handle_update(port_id, action_handle, update,
&error)) {
return port_flow_complain(&error);
}
@@ -1621,6 +1639,51 @@ port_action_handle_query(portid_t port_id, uint32_t id)
}
data = NULL;
break;
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ if (!ret) {
+ struct rte_flow_action_conntrack *ct = data;
+
+ printf("Conntrack Context:\n"
+ " Peer: %u, Flow dir: %s, Enable: %u\n"
+ " Live: %u, SACK: %u, CACK: %u\n"
+ " Packet dir: %s, Liberal: %u, State: %u\n"
+ " Factor: %u, Retrans: %u, TCP flags: %u\n"
+ " Last Seq: %u, Last ACK: %u\n"
+ " Last Win: %u, Last End: %u\n",
+ ct->peer_port,
+ ct->is_original_dir ? "Original" : "Reply",
+ ct->enable, ct->live_connection,
+ ct->selective_ack, ct->challenge_ack_passed,
+ ct->last_direction ? "Original" : "Reply",
+ ct->liberal_mode, ct->state,
+ ct->max_ack_window, ct->retransmission_limit,
+ ct->last_index, ct->last_seq, ct->last_ack,
+ ct->last_window, ct->last_end);
+ printf(" Original Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->original_dir.scale,
+ ct->original_dir.close_initiated,
+ ct->original_dir.last_ack_seen,
+ ct->original_dir.data_unacked,
+ ct->original_dir.sent_end,
+ ct->original_dir.reply_end,
+ ct->original_dir.max_win,
+ ct->original_dir.max_ack);
+ printf(" Reply Dir:\n"
+ " scale: %u, fin: %u, ack seen: %u\n"
+ " unacked data: %u\n Sent end: %u,"
+ " Reply end: %u, Max win: %u, Max ACK: %u\n",
+ ct->reply_dir.scale,
+ ct->reply_dir.close_initiated,
+ ct->reply_dir.last_ack_seen,
+ ct->reply_dir.data_unacked,
+ ct->reply_dir.sent_end, ct->reply_dir.reply_end,
+ ct->reply_dir.max_win, ct->reply_dir.max_ack);
+ }
+ data = NULL;
+ break;
default:
printf("Indirect action %u (type: %d) on port %u doesn't"
" support query\n", id, pia->type, port_id);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c314b30f2e..9530ec5fe0 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -630,6 +630,8 @@ extern struct mplsoudp_decap_conf mplsoudp_decap_conf;
extern enum rte_eth_rx_mq_mode rx_mq_mode;
+extern struct rte_flow_action_conntrack conntrack_context;
+
static inline unsigned int
lcore_num(void)
{
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index a5e2a8e503..d06ddb2074 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -215,6 +215,9 @@ New Features
``show port (port_id) rxq (queue_id) desc used count``
* Added command to dump internal representation information of single flow.
``flow dump (port_id) rule (rule_id)``
+ * Added commands to construct a conntrack context, to create and update
+ an indirect action handle for the conntrack action, and to match on the
+ conntrack item.
* **Updated ipsec-secgw sample application.**
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 715e209fd2..efa32bb6ad 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3789,6 +3789,8 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``conntrack``: match conntrack state.
+
Actions list
^^^^^^^^^^^^
@@ -4927,6 +4929,39 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample conntrack rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Conntrack rules can be set by the following commands.
+
+First, construct the connection context from information collected on both
+directions of the connection. In the first table, create a flow rule that uses
+the conntrack action and jumps to the next table. In the next table, create a
+rule that matches on the conntrack item to check the connection state.
+
+::
+
+ testpmd> set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0
+ last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510
+ last_seq 2632987379 last_ack 2532480967 last_end 2632987379
+ last_index 0x8
+ testpmd> set conntrack orig scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2632987379 reply_end 2633016339 max_win 28960
+ max_ack 2632987379
+ testpmd> set conntrack rply scale 7 fin 0 acked 1 unack_data 0
+ sent_end 2532480967 reply_end 2532546247 max_win 65280
+ max_ack 2532480967
+ testpmd> flow indirect_action 0 create ingress action conntrack / end
+ testpmd> flow create 0 group 3 ingress pattern eth / ipv4 / tcp / end actions indirect 0 / jump group 5 / end
+ testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack is 1 / end actions queue index 5 / end
+
+Construct the conntrack context again with only "is_orig" set to 0 (the other
+fields are ignored), then use the "update" interface to switch the direction.
+Create flow rules as above on the peer port.
+
+::
+
+ testpmd> flow indirect_action 0 update 0 action conntrack_update dir / end
+
BPF Functions
--------------
--
2.19.0.windows.1
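The `scale`, `last_win`, and `max_win` values used in the testpmd commands above follow TCP window scaling (RFC 7323): the 16-bit window field advertised in a segment is left-shifted by the scale factor negotiated in the SYN exchange. A minimal sketch of the arithmetic (the helper name is hypothetical, not part of testpmd):

```c
#include <assert.h>
#include <stdint.h>

/* Effective receive window: the 16-bit window field from the TCP
 * header shifted by the scale factor negotiated in the SYN exchange
 * (RFC 7323). */
static uint32_t
effective_window(uint16_t win_field, uint8_t scale)
{
	return (uint32_t)win_field << scale;
}
```

With the sample values above, a window field of 510 and a scale of 7 yield an effective window of 65280 bytes, which is the window a conntrack implementation would use for in-window sequence checks.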
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: introduce conntrack flow action and item
2021-04-19 17:51 ` [dpdk-dev] [PATCH v5 1/2] " Bing Zhao
@ 2021-04-19 18:07 ` Thomas Monjalon
2021-04-19 23:29 ` Ferruh Yigit
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2021-04-19 18:07 UTC (permalink / raw)
To: Bing Zhao
Cc: orika, ferruh.yigit, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
19/04/2021 19:51, Bing Zhao:
> This commit introduces the conntrack action and item.
[...]
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
It makes me think we should work together to simplify the whole
rte_flow guide and makes it clearer.
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: introduce conntrack flow action and item
2021-04-19 18:07 ` Thomas Monjalon
@ 2021-04-19 23:29 ` Ferruh Yigit
0 siblings, 0 replies; 45+ messages in thread
From: Ferruh Yigit @ 2021-04-19 23:29 UTC (permalink / raw)
To: Thomas Monjalon, Bing Zhao
Cc: orika, andrew.rybchenko, dev, ajit.khaparde, xiaoyun.li
On 4/19/2021 7:07 PM, Thomas Monjalon wrote:
> 19/04/2021 19:51, Bing Zhao:
>> This commit introduces the conntrack action and item.
> [...]
>> Signed-off-by: Bing Zhao <bingz@nvidia.com>
>> Acked-by: Ori Kam <orika@nvidia.com>
>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
Series applied to dpdk-next-net/main, thanks.