* [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
@ 2021-05-27 8:25 Ivan Malov
2021-05-30 7:27 ` Ori Kam
2021-06-14 12:42 ` Singh, Aman Deep
0 siblings, 2 replies; 16+ messages in thread
From: Ivan Malov @ 2021-05-27 8:25 UTC (permalink / raw)
To: dev
Cc: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ori Kam,
Ray Kinsella, Neil Horman
DPDK applications (for example, OvS) or tests which use RTE
flow API need to log created or rejected flow rules to help
to recognise what goes right or wrong. From this standpoint,
testpmd-compliant format is nice for the purpose because it
allows to copy-paste the flow rules and debug using testpmd.
Recognisable pattern items:
VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP,
TCP, VXLAN, NVGRE, GENEVE, MARK, PPPOES, PPPOED.
Recognisable actions:
VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
Recognisable RSS types (action RSS):
IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP, NONFRAG_IPV4_OTHER,
IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP, NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER,
IPV6_EX, IPV6_TCP_EX, IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY,
L4_SRC_ONLY, L4_DST_ONLY.
Unrecognised parts of the flow specification are represented by
tokens "{unknown}" and "{unknown bits}". Interested parties are
welcome to extend this tool to recognise more items and actions.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
lib/ethdev/meson.build | 1 +
lib/ethdev/rte_flow.h | 33 +
lib/ethdev/rte_flow_snprint.c | 1681 +++++++++++++++++++++++++++++++++
lib/ethdev/version.map | 3 +
4 files changed, 1718 insertions(+)
create mode 100644 lib/ethdev/rte_flow_snprint.c
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 0205c853df..97bba4fa1b 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -8,6 +8,7 @@ sources = files(
'rte_class_eth.c',
'rte_ethdev.c',
'rte_flow.c',
+ 'rte_flow_snprint.c',
'rte_mtr.c',
'rte_tm.c',
)
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 961a5884fe..cd5e9ef631 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
struct rte_flow_item *items,
uint32_t num_of_items,
struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Dump testpmd-compliant textual representation of the flow rule.
+ * Invoke this with zero-size buffer to learn the string size and
+ * invoke this for the second time to actually dump the flow rule.
+ * The buffer size on the second invocation = the string size + 1.
+ *
+ * @param[out] buf
+ * Buffer to save the dump in, or NULL
+ * @param buf_size
+ * Buffer size, or 0
+ * @param[out] nb_chars_total
+ * Resulting string size (excluding the terminating null byte)
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] pattern
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise
+ */
+__rte_experimental
+int
+rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[]);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/ethdev/rte_flow_snprint.c b/lib/ethdev/rte_flow_snprint.c
new file mode 100644
index 0000000000..513886528b
--- /dev/null
+++ b/lib/ethdev/rte_flow_snprint.c
@@ -0,0 +1,1681 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2021 Xilinx, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include "rte_ethdev.h"
+#include "rte_flow.h"
+
+static int
+rte_flow_snprint_str(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ const char *str = value_ptr;
+ size_t write_size_max;
+ int retv;
+
+ write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
+ retv = snprintf(buf + *nb_chars_total, write_size_max, " %s", str);
+ if (retv < 0)
+ return -EFAULT;
+
+ *nb_chars_total += retv;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_ether_addr(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ const struct rte_ether_addr *ea = value_ptr;
+ const uint8_t *ab = ea->addr_bytes;
+ size_t write_size_max;
+ int retv;
+
+ write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
+ retv = snprintf(buf + *nb_chars_total, write_size_max,
+ " %02x:%02x:%02x:%02x:%02x:%02x",
+ ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
+ if (retv < 0)
+ return -EFAULT;
+
+ *nb_chars_total += retv;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_ipv4_addr(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ char addr_str[INET_ADDRSTRLEN];
+
+ if (inet_ntop(AF_INET, value_ptr, addr_str, sizeof(addr_str)) == NULL)
+ return -EFAULT;
+
+ return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
+}
+
+static int
+rte_flow_snprint_ipv6_addr(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ char addr_str[INET6_ADDRSTRLEN];
+
+ if (inet_ntop(AF_INET6, value_ptr, addr_str, sizeof(addr_str)) == NULL)
+ return -EFAULT;
+
+ return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
+}
+
+#define SNPRINT(_type, _fmt) \
+ do { \
+ const _type *vp = value_ptr; \
+ size_t write_size_max; \
+ int retv; \
+ \
+ write_size_max = buf_size - \
+ RTE_MIN(buf_size, *nb_chars_total); \
+ retv = snprintf(buf + *nb_chars_total, write_size_max, \
+ _fmt, *vp); \
+ if (retv < 0) \
+ return -EFAULT; \
+ \
+ *nb_chars_total += retv; \
+ \
+ return 0; \
+ } while (0)
+
+static int
+rte_flow_snprint_uint32(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint32_t, " %u");
+}
+
+static int
+rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint32_t, " 0x%08x");
+}
+
+static int
+rte_flow_snprint_hex24(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint32_t, " 0x%06x");
+}
+
+static int
+rte_flow_snprint_hex20(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint32_t, " 0x%05x");
+}
+
+static int
+rte_flow_snprint_uint16(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint16_t, " %hu");
+}
+
+static int
+rte_flow_snprint_uint16_be2cpu(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *value_ptr)
+{
+ const uint16_t *valuep = value_ptr;
+ uint16_t value = rte_be_to_cpu_16(*valuep);
+
+ value_ptr = &value;
+
+ SNPRINT(uint16_t, " %hu");
+}
+
+static int
+rte_flow_snprint_hex16_be2cpu(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *value_ptr)
+{
+ const uint16_t *valuep = value_ptr;
+ uint16_t value = rte_be_to_cpu_16(*valuep);
+
+ value_ptr = &value;
+
+ SNPRINT(uint16_t, " 0x%04x");
+}
+
+static int
+rte_flow_snprint_uint8(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint8_t, " %hhu");
+}
+
+static int
+rte_flow_snprint_hex8(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint8_t, " 0x%02x");
+}
+
+static int
+rte_flow_snprint_byte(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *value_ptr)
+{
+ SNPRINT(uint8_t, "%02x");
+}
+
+#undef SNPRINT
+
+static int
+rte_flow_snprint_attr(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_attr *attr)
+{
+ int rc;
+
+ if (attr == NULL)
+ return 0;
+
+ if (attr->group != 0) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "group");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
+ &attr->group);
+ if (rc != 0)
+ return rc;
+ }
+
+ if (attr->priority != 0) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "priority");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
+ &attr->priority);
+ if (rc != 0)
+ return rc;
+ }
+
+ if (attr->transfer) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "transfer");
+ if (rc != 0)
+ return rc;
+ }
+
+ if (attr->ingress) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "ingress");
+ if (rc != 0)
+ return rc;
+ }
+
+ if (attr->egress) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "egress");
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
+static void
+rte_flow_item_init_parse(const struct rte_flow_item *item, size_t item_size,
+ void *spec, void *last, void *mask)
+{
+ if (item->spec != NULL)
+ memcpy(spec, item->spec, item_size);
+ else
+ memset(spec, 0, item_size);
+
+ if (item->last != NULL)
+ memcpy(last, item->last, item_size);
+ else
+ memset(last, 0, item_size);
+
+ if (item->mask != NULL)
+ memcpy(mask, item->mask, item_size);
+ else
+ memset(mask, 0, item_size);
+}
+
+static bool
+rte_flow_buf_is_all_zeros(const void *buf_ptr, size_t buf_size)
+{
+ const uint8_t *buf = buf_ptr;
+ unsigned int i;
+ uint8_t t = 0;
+
+ for (i = 0; i < buf_size; ++i)
+ t |= buf[i];
+
+ return (t == 0);
+}
+
+static bool
+rte_flow_buf_is_all_ones(const void *buf_ptr, size_t buf_size)
+{
+ const uint8_t *buf = buf_ptr;
+ unsigned int i;
+ uint8_t t = ~0;
+
+ for (i = 0; i < buf_size; ++i)
+ t &= buf[i];
+
+ return (t == (uint8_t)(~0));
+}
+
+static int
+rte_flow_snprint_item_field(char *buf, size_t buf_size, size_t *nb_chars_total,
+ int (*value_dump_cb)(char *, size_t, size_t *,
+ const void *),
+ int (*mask_dump_cb)(char *, size_t, size_t *,
+ const void *),
+ const char *field_name, size_t field_size,
+ void *field_spec, void *field_last,
+ void *field_mask, void *field_full_mask)
+{
+ bool mask_is_all_ones;
+ bool last_is_futile;
+ int rc;
+
+ if (rte_flow_buf_is_all_zeros(field_mask, field_size))
+ return 0;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, field_name);
+ if (rc != 0)
+ return rc;
+
+ if (field_full_mask != NULL) {
+ mask_is_all_ones = (memcmp(field_mask, field_full_mask,
+ field_size) == 0);
+ } else {
+ mask_is_all_ones = rte_flow_buf_is_all_ones(field_mask,
+ field_size);
+ }
+ last_is_futile = rte_flow_buf_is_all_zeros(field_last, field_size) ||
+ (memcmp(field_spec, field_last, field_size) == 0);
+
+ if (mask_is_all_ones && last_is_futile) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "is");
+ if (rc != 0)
+ return rc;
+
+ rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
+ if (rc != 0)
+ return rc;
+
+ goto done;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "spec");
+ if (rc != 0)
+ return rc;
+
+ rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
+ if (rc != 0)
+ return rc;
+
+ if (!last_is_futile) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ field_name);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "last");
+ if (rc != 0)
+ return rc;
+
+ rc = value_dump_cb(buf, buf_size, nb_chars_total, field_last);
+ if (rc != 0)
+ return rc;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, field_name);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "mask");
+ if (rc != 0)
+ return rc;
+
+ rc = mask_dump_cb(buf, buf_size, nb_chars_total, field_mask);
+ if (rc != 0)
+ return rc;
+
+done:
+ /*
+ * Zeroise the printed field. When all item fields have been printed,
+ * the corresponding item handler will make sure that the whole item
+ * mask is all-zeros. This is needed to highlight unsupported fields.
+ *
+ * If the provided field mask pointer refers to a separate container
+ * rather than to the field in the item mask directly, it's the duty
+ * of the item handler to clear the field in the item mask correctly.
+ */
+ memset(field_mask, 0, field_size);
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_vf(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_vf *spec = spec_ptr;
+ struct rte_flow_item_vf *last = last_ptr;
+ struct rte_flow_item_vf *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex32, "id",
+ sizeof(spec->id), &spec->id, &last->id,
+ &mask->id, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_phy_port(char *buf, size_t buf_size,
+ size_t *nb_chars_total, void *spec_ptr,
+ void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_phy_port *spec = spec_ptr;
+ struct rte_flow_item_phy_port *last = last_ptr;
+ struct rte_flow_item_phy_port *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex32, "index",
+ sizeof(spec->index), &spec->index,
+ &last->index, &mask->index, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_port_id(char *buf, size_t buf_size,
+ size_t *nb_chars_total, void *spec_ptr,
+ void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_port_id *spec = spec_ptr;
+ struct rte_flow_item_port_id *last = last_ptr;
+ struct rte_flow_item_port_id *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex32, "id",
+ sizeof(spec->id), &spec->id, &last->id,
+ &mask->id, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_eth(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_eth *spec = spec_ptr;
+ struct rte_flow_item_eth *last = last_ptr;
+ struct rte_flow_item_eth *mask = mask_ptr;
+ uint8_t has_vlan_full_mask = 1;
+ uint8_t has_vlan_spec;
+ uint8_t has_vlan_last;
+ uint8_t has_vlan_mask;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ether_addr,
+ rte_flow_snprint_ether_addr, "dst",
+ sizeof(spec->hdr.d_addr),
+ &spec->hdr.d_addr, &last->hdr.d_addr,
+ &mask->hdr.d_addr, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ether_addr,
+ rte_flow_snprint_ether_addr, "src",
+ sizeof(spec->hdr.s_addr),
+ &spec->hdr.s_addr, &last->hdr.s_addr,
+ &mask->hdr.s_addr, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "type",
+ sizeof(spec->hdr.ether_type),
+ &spec->hdr.ether_type,
+ &last->hdr.ether_type,
+ &mask->hdr.ether_type, NULL);
+ if (rc != 0)
+ return rc;
+
+ has_vlan_spec = spec->has_vlan;
+ has_vlan_last = last->has_vlan;
+ has_vlan_mask = mask->has_vlan;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_uint8, "has_vlan",
+ sizeof(has_vlan_spec), &has_vlan_spec,
+ &has_vlan_last, &has_vlan_mask,
+ &has_vlan_full_mask);
+ if (rc != 0)
+ return rc;
+
+ mask->has_vlan = 0;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_vlan(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_vlan *spec = spec_ptr;
+ struct rte_flow_item_vlan *last = last_ptr;
+ struct rte_flow_item_vlan *mask = mask_ptr;
+ uint8_t has_more_vlan_full_mask = 1;
+ uint8_t has_more_vlan_spec;
+ uint8_t has_more_vlan_last;
+ uint8_t has_more_vlan_mask;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "tci",
+ sizeof(spec->hdr.vlan_tci),
+ &spec->hdr.vlan_tci,
+ &last->hdr.vlan_tci,
+ &mask->hdr.vlan_tci, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu,
+ "inner_type",
+ sizeof(spec->hdr.eth_proto),
+ &spec->hdr.eth_proto,
+ &last->hdr.eth_proto,
+ &mask->hdr.eth_proto, NULL);
+ if (rc != 0)
+ return rc;
+
+ has_more_vlan_spec = spec->has_more_vlan;
+ has_more_vlan_last = last->has_more_vlan;
+ has_more_vlan_mask = mask->has_more_vlan;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_uint8,
+ "has_more_vlan",
+ sizeof(has_more_vlan_spec),
+ &has_more_vlan_spec,
+ &has_more_vlan_last,
+ &has_more_vlan_mask,
+ &has_more_vlan_full_mask);
+ if (rc != 0)
+ return rc;
+
+ mask->has_more_vlan = 0;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_ipv4(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_ipv4 *spec = spec_ptr;
+ struct rte_flow_item_ipv4 *last = last_ptr;
+ struct rte_flow_item_ipv4 *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex8,
+ rte_flow_snprint_hex8, "tos",
+ sizeof(spec->hdr.type_of_service),
+ &spec->hdr.type_of_service,
+ &last->hdr.type_of_service,
+ &mask->hdr.type_of_service, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu,
+ "packet_id",
+ sizeof(spec->hdr.packet_id),
+ &spec->hdr.packet_id,
+ &last->hdr.packet_id,
+ &mask->hdr.packet_id, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu,
+ "fragment_offset",
+ sizeof(spec->hdr.fragment_offset),
+ &spec->hdr.fragment_offset,
+ &last->hdr.fragment_offset,
+ &mask->hdr.fragment_offset, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_hex8, "ttl",
+ sizeof(spec->hdr.time_to_live),
+ &spec->hdr.time_to_live,
+ &last->hdr.time_to_live,
+ &mask->hdr.time_to_live, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_hex8, "proto",
+ sizeof(spec->hdr.next_proto_id),
+ &spec->hdr.next_proto_id,
+ &last->hdr.next_proto_id,
+ &mask->hdr.next_proto_id, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ipv4_addr,
+ rte_flow_snprint_ipv4_addr, "src",
+ sizeof(spec->hdr.src_addr),
+ &spec->hdr.src_addr,
+ &last->hdr.src_addr,
+ &mask->hdr.src_addr, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ipv4_addr,
+ rte_flow_snprint_ipv4_addr, "dst",
+ sizeof(spec->hdr.dst_addr),
+ &spec->hdr.dst_addr,
+ &last->hdr.dst_addr,
+ &mask->hdr.dst_addr, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_ipv6(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ uint32_t tc_full_mask = (RTE_IPV6_HDR_TC_MASK >> RTE_IPV6_HDR_TC_SHIFT);
+ uint32_t fl_full_mask = (RTE_IPV6_HDR_FL_MASK >> RTE_IPV6_HDR_FL_SHIFT);
+ struct rte_flow_item_ipv6 *spec = spec_ptr;
+ struct rte_flow_item_ipv6 *last = last_ptr;
+ struct rte_flow_item_ipv6 *mask = mask_ptr;
+ uint8_t has_frag_ext_full_mask = 1;
+ uint8_t has_frag_ext_spec;
+ uint8_t has_frag_ext_last;
+ uint8_t has_frag_ext_mask;
+ uint32_t vtc_flow;
+ uint32_t fl_spec;
+ uint32_t fl_last;
+ uint32_t fl_mask;
+ uint32_t tc_spec;
+ uint32_t tc_last;
+ uint32_t tc_mask;
+ int rc;
+
+ vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
+ tc_spec = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ fl_spec = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >> RTE_IPV6_HDR_FL_SHIFT;
+
+ vtc_flow = rte_be_to_cpu_32(last->hdr.vtc_flow);
+ tc_last = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ fl_last = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >> RTE_IPV6_HDR_FL_SHIFT;
+
+ vtc_flow = rte_be_to_cpu_32(mask->hdr.vtc_flow);
+ tc_mask = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ fl_mask = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >> RTE_IPV6_HDR_FL_SHIFT;
+
+ mask->hdr.vtc_flow &= ~rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK |
+ RTE_IPV6_HDR_FL_MASK);
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex8,
+ rte_flow_snprint_hex8, "tc",
+ sizeof(tc_spec), &tc_spec, &tc_last,
+ &tc_mask, &tc_full_mask);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex20, "flow",
+ sizeof(fl_spec), &fl_spec, &fl_last,
+ &fl_mask, &fl_full_mask);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_hex8, "proto",
+ sizeof(spec->hdr.proto),
+ &spec->hdr.proto,
+ &last->hdr.proto,
+ &mask->hdr.proto, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_hex8, "hop",
+ sizeof(spec->hdr.hop_limits),
+ &spec->hdr.hop_limits,
+ &last->hdr.hop_limits,
+ &mask->hdr.hop_limits, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ipv6_addr,
+ rte_flow_snprint_ipv6_addr, "src",
+ sizeof(spec->hdr.src_addr),
+ &spec->hdr.src_addr,
+ &last->hdr.src_addr,
+ &mask->hdr.src_addr, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_ipv6_addr,
+ rte_flow_snprint_ipv6_addr, "dst",
+ sizeof(spec->hdr.dst_addr),
+ &spec->hdr.dst_addr,
+ &last->hdr.dst_addr,
+ &mask->hdr.dst_addr, NULL);
+
+ has_frag_ext_spec = spec->has_frag_ext;
+ has_frag_ext_last = last->has_frag_ext;
+ has_frag_ext_mask = mask->has_frag_ext;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint8,
+ rte_flow_snprint_uint8, "has_frag_ext",
+ sizeof(has_frag_ext_spec),
+ &has_frag_ext_spec, &has_frag_ext_last,
+ &has_frag_ext_mask,
+ &has_frag_ext_full_mask);
+ if (rc != 0)
+ return rc;
+
+ mask->has_frag_ext = 0;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_udp(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_udp *spec = spec_ptr;
+ struct rte_flow_item_udp *last = last_ptr;
+ struct rte_flow_item_udp *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "src",
+ sizeof(spec->hdr.src_port),
+ &spec->hdr.src_port,
+ &last->hdr.src_port,
+ &mask->hdr.src_port, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "dst",
+ sizeof(spec->hdr.dst_port),
+ &spec->hdr.dst_port,
+ &last->hdr.dst_port,
+ &mask->hdr.dst_port, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_tcp(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_tcp *spec = spec_ptr;
+ struct rte_flow_item_tcp *last = last_ptr;
+ struct rte_flow_item_tcp *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "src",
+ sizeof(spec->hdr.src_port),
+ &spec->hdr.src_port,
+ &last->hdr.src_port,
+ &mask->hdr.src_port, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu, "dst",
+ sizeof(spec->hdr.dst_port),
+ &spec->hdr.dst_port,
+ &last->hdr.dst_port,
+ &mask->hdr.dst_port, NULL);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex8,
+ rte_flow_snprint_hex8, "flags",
+ sizeof(spec->hdr.tcp_flags),
+ &spec->hdr.tcp_flags,
+ &last->hdr.tcp_flags,
+ &mask->hdr.tcp_flags, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_vxlan(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_vxlan *spec = spec_ptr;
+ struct rte_flow_item_vxlan *last = last_ptr;
+ struct rte_flow_item_vxlan *mask = mask_ptr;
+ uint32_t vni_full_mask = 0xffffff;
+ uint32_t vni_spec;
+ uint32_t vni_last;
+ uint32_t vni_mask;
+ int rc;
+
+ vni_spec = rte_be_to_cpu_32(spec->hdr.vx_vni) >> 8;
+ vni_last = rte_be_to_cpu_32(last->hdr.vx_vni) >> 8;
+ vni_mask = rte_be_to_cpu_32(mask->hdr.vx_vni) >> 8;
+
+ mask->hdr.vx_vni &= ~RTE_BE32(0xffffff00);
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex24, "vni",
+ sizeof(vni_spec), &vni_spec,
+ &vni_last, &vni_mask,
+ &vni_full_mask);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_nvgre(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_nvgre *spec = spec_ptr;
+ struct rte_flow_item_nvgre *last = last_ptr;
+ struct rte_flow_item_nvgre *mask = mask_ptr;
+ uint32_t *tni_and_flow_id_specp = (uint32_t *)spec->tni;
+ uint32_t *tni_and_flow_id_lastp = (uint32_t *)last->tni;
+ uint32_t *tni_and_flow_id_maskp = (uint32_t *)mask->tni;
+ uint32_t tni_full_mask = 0xffffff;
+ uint32_t tni_spec;
+ uint32_t tni_last;
+ uint32_t tni_mask;
+ int rc;
+
+ tni_spec = rte_be_to_cpu_32(*tni_and_flow_id_specp) >> 8;
+ tni_last = rte_be_to_cpu_32(*tni_and_flow_id_lastp) >> 8;
+ tni_mask = rte_be_to_cpu_32(*tni_and_flow_id_maskp) >> 8;
+
+ memset(mask->tni, 0, sizeof(mask->tni));
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex24, "tni",
+ sizeof(tni_spec), &tni_spec,
+ &tni_last, &tni_mask,
+ &tni_full_mask);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_geneve(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_geneve *spec = spec_ptr;
+ struct rte_flow_item_geneve *last = last_ptr;
+ struct rte_flow_item_geneve *mask = mask_ptr;
+ uint32_t *vni_and_rsvd_specp = (uint32_t *)spec->vni;
+ uint32_t *vni_and_rsvd_lastp = (uint32_t *)last->vni;
+ uint32_t *vni_and_rsvd_maskp = (uint32_t *)mask->vni;
+ uint32_t vni_full_mask = 0xffffff;
+ uint16_t optlen_full_mask = 0x3f;
+ uint16_t optlen_spec;
+ uint16_t optlen_last;
+ uint16_t optlen_mask;
+ uint32_t vni_spec;
+ uint32_t vni_last;
+ uint32_t vni_mask;
+ int rc;
+
+ optlen_spec = rte_be_to_cpu_16(spec->ver_opt_len_o_c_rsvd0) & 0x3f00;
+ optlen_spec >>= 8;
+
+ optlen_last = rte_be_to_cpu_16(last->ver_opt_len_o_c_rsvd0) & 0x3f00;
+ optlen_last >>= 8;
+
+ optlen_mask = rte_be_to_cpu_16(mask->ver_opt_len_o_c_rsvd0) & 0x3f00;
+ optlen_mask >>= 8;
+
+ mask->ver_opt_len_o_c_rsvd0 &= ~RTE_BE16(0x3f00);
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16,
+ rte_flow_snprint_hex8, "optlen",
+ sizeof(optlen_spec), &optlen_spec,
+ &optlen_last, &optlen_mask,
+ &optlen_full_mask);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_hex16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu,
+ "protocol", sizeof(spec->protocol),
+ &spec->protocol, &last->protocol,
+ &mask->protocol, NULL);
+ if (rc != 0)
+ return rc;
+
+ vni_spec = rte_be_to_cpu_32(*vni_and_rsvd_specp) >> 8;
+ vni_last = rte_be_to_cpu_32(*vni_and_rsvd_lastp) >> 8;
+ vni_mask = rte_be_to_cpu_32(*vni_and_rsvd_maskp) >> 8;
+
+ memset(mask->vni, 0, sizeof(mask->vni));
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex24, "vni",
+ sizeof(vni_spec), &vni_spec,
+ &vni_last, &vni_mask,
+ &vni_full_mask);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_mark(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_mark *spec = spec_ptr;
+ struct rte_flow_item_mark *last = last_ptr;
+ struct rte_flow_item_mark *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint32,
+ rte_flow_snprint_hex32, "id",
+ sizeof(spec->id), &spec->id,
+ &last->id, &mask->id, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_item_pppoed(char *buf, size_t buf_size, size_t *nb_chars_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr)
+{
+ struct rte_flow_item_pppoe *spec = spec_ptr;
+ struct rte_flow_item_pppoe *last = last_ptr;
+ struct rte_flow_item_pppoe *mask = mask_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
+ rte_flow_snprint_uint16_be2cpu,
+ rte_flow_snprint_hex16_be2cpu,
+ "seid", sizeof(spec->session_id),
+ &spec->session_id, &last->session_id,
+ &mask->session_id, NULL);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static const struct {
+ const char *name;
+ int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_char_total,
+ void *spec_ptr, void *last_ptr, void *mask_ptr);
+ size_t size;
+} item_table[] = {
+ [RTE_FLOW_ITEM_TYPE_VOID] = {
+ .name = "void"
+ },
+ [RTE_FLOW_ITEM_TYPE_PF] = {
+ .name = "pf"
+ },
+ [RTE_FLOW_ITEM_TYPE_PPPOES] = {
+ .name = "pppoes"
+ },
+ [RTE_FLOW_ITEM_TYPE_PPPOED] = {
+ .name = "pppoed",
+ .parse_cb = rte_flow_snprint_item_pppoed,
+ .size = sizeof(struct rte_flow_item_pppoe)
+ },
+
+#define ITEM(_name_uppercase, _name_lowercase) \
+ [RTE_FLOW_ITEM_TYPE_##_name_uppercase] = { \
+ .name = #_name_lowercase, \
+ .parse_cb = rte_flow_snprint_item_##_name_lowercase, \
+ .size = sizeof(struct rte_flow_item_##_name_lowercase) \
+ }
+
+ ITEM(VF, vf),
+ ITEM(PHY_PORT, phy_port),
+ ITEM(PORT_ID, port_id),
+ ITEM(ETH, eth),
+ ITEM(VLAN, vlan),
+ ITEM(IPV4, ipv4),
+ ITEM(IPV6, ipv6),
+ ITEM(UDP, udp),
+ ITEM(TCP, tcp),
+ ITEM(VXLAN, vxlan),
+ ITEM(NVGRE, nvgre),
+ ITEM(GENEVE, geneve),
+ ITEM(MARK, mark),
+
+#undef ITEM
+};
+
+static int
+rte_flow_snprint_item(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_item *item)
+{
+ int rc;
+
+ if (item->type < 0 || item->type >= RTE_DIM(item_table) ||
+ item_table[item->type].name == NULL) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "{unknown}");
+ if (rc != 0)
+ return rc;
+
+ goto out;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ item_table[item->type].name);
+ if (rc != 0)
+ return rc;
+
+ if (item_table[item->type].parse_cb != NULL) {
+ size_t item_size = item_table[item->type].size;
+ uint8_t spec[item_size];
+ uint8_t last[item_size];
+ uint8_t mask[item_size];
+
+ rte_flow_item_init_parse(item, item_size, spec, last, mask);
+
+ rc = item_table[item->type].parse_cb(buf, buf_size,
+ nb_chars_total,
+ spec, last, mask);
+ if (rc != 0)
+ return rc;
+
+ if (!rte_flow_buf_is_all_zeros(mask, item_size)) {
+ rc = rte_flow_snprint_str(buf, buf_size,
+ nb_chars_total,
+ "{unknown bits}");
+ if (rc != 0)
+ return rc;
+ }
+ }
+
+out:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_pattern(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_item pattern[])
+{
+ const struct rte_flow_item *item;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "pattern");
+ if (rc != 0)
+ return rc;
+
+ if (pattern == NULL)
+ goto end;
+
+ for (item = pattern;
+ item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ rc = rte_flow_snprint_item(buf, buf_size, nb_chars_total, item);
+ if (rc != 0)
+ return rc;
+ }
+
+end:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_jump(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_jump *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "group");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
+ &conf->group);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_mark(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_mark *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_queue(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *conf_ptr)
+{
+ const struct rte_flow_action_queue *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
+ &conf->index);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_count(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *conf_ptr)
+{
+ const struct rte_flow_action_count *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "identifier");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
+ if (rc != 0)
+ return rc;
+
+ if (conf->shared) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "shared");
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_rss_func(char *buf, size_t buf_size,
+ size_t *nb_chars_total,
+ enum rte_eth_hash_function func)
+{
+ int rc;
+
+ if (func == RTE_ETH_HASH_FUNCTION_DEFAULT)
+ return 0;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "func");
+ if (rc != 0)
+ return rc;
+
+ switch (func) {
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "toeplitz");
+ break;
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "simple_xor");
+ break;
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "symmetric_toeplitz");
+ break;
+ default:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "{unknown}");
+ break;
+ }
+
+ return rc;
+}
+
+static int
+rte_flow_snprint_action_rss_level(char *buf, size_t buf_size,
+ size_t *nb_chars_total, uint32_t level)
+{
+ int rc;
+
+ if (level == 0)
+ return 0;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "level");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &level);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static const struct {
+ const char *name;
+ uint64_t flag;
+} rss_type_table[] = {
+ { "ipv4", ETH_RSS_IPV4 },
+ { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
+ { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
+ { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
+ { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
+ { "ipv6", ETH_RSS_IPV6 },
+ { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
+ { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
+ { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
+ { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
+ { "ipv6-ex", ETH_RSS_IPV6_EX },
+ { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
+ { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
+
+ { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
+ { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
+ { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
+ { "l4-dst-only", ETH_RSS_L4_DST_ONLY },
+};
+
+static int
+rte_flow_snprint_action_rss_types(char *buf, size_t buf_size,
+ size_t *nb_chars_total, uint64_t types)
+{
+ unsigned int i;
+ int rc;
+
+ if (types == 0)
+ return 0;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "types");
+ if (rc != 0)
+ return rc;
+
+ for (i = 0; i < RTE_DIM(rss_type_table); ++i) {
+ uint64_t flag = rss_type_table[i].flag;
+
+ if ((types & flag) == 0)
+ continue;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ rss_type_table[i].name);
+ if (rc != 0)
+ return rc;
+
+ types &= ~flag;
+ }
+
+ if (types != 0) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "{unknown}");
+ if (rc != 0)
+ return rc;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_rss_queues(char *buf, size_t buf_size,
+ size_t *nb_chars_total,
+ const uint16_t *queues,
+ unsigned int nb_queues)
+{
+ unsigned int i;
+ int rc;
+
+ if (nb_queues == 0)
+ return 0;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "queues");
+ if (rc != 0)
+ return rc;
+
+ for (i = 0; i < nb_queues; ++i) {
+ rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
+ &queues[i]);
+ if (rc != 0)
+ return rc;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_rss(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_rss *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_action_rss_func(buf, buf_size, nb_chars_total,
+ conf->func);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_action_rss_level(buf, buf_size, nb_chars_total,
+ conf->level);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_action_rss_types(buf, buf_size, nb_chars_total,
+ conf->types);
+ if (rc != 0)
+ return rc;
+
+ if (conf->key_len != 0) {
+ if (conf->key != NULL) {
+ unsigned int i;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "" /* results in space */);
+ if (rc != 0)
+ return rc;
+
+ for (i = 0; i < conf->key_len; ++i) {
+ rc = rte_flow_snprint_byte(buf, buf_size,
+ nb_chars_total,
+ &conf->key[i]);
+ if (rc != 0)
+ return rc;
+ }
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "key_len");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
+ &conf->key_len);
+ if (rc != 0)
+ return rc;
+ }
+
+ rc = rte_flow_snprint_action_rss_queues(buf, buf_size, nb_chars_total,
+ conf->queue, conf->queue_num);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_vf(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_vf *conf = conf_ptr;
+ int rc;
+
+ if (conf->original) {
+ return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "original on");
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_phy_port(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *conf_ptr)
+{
+ const struct rte_flow_action_phy_port *conf = conf_ptr;
+ int rc;
+
+ if (conf->original) {
+ return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "original on");
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
+ &conf->index);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_port_id(char *buf, size_t buf_size,
+ size_t *nb_chars_total, const void *conf_ptr)
+{
+ const struct rte_flow_action_port_id *conf = conf_ptr;
+ int rc;
+
+ if (conf->original) {
+ return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "original on");
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_of_push_vlan(char *buf, size_t buf_size,
+ size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_of_push_vlan *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "ethertype");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_hex16_be2cpu(buf, buf_size, nb_chars_total,
+ &conf->ethertype);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_of_set_vlan_vid(char *buf, size_t buf_size,
+ size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_of_set_vlan_vid *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_vid");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint16_be2cpu(buf, buf_size, nb_chars_total,
+ &conf->vlan_vid);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_action_of_set_vlan_pcp(char *buf, size_t buf_size,
+ size_t *nb_chars_total,
+ const void *conf_ptr)
+{
+ const struct rte_flow_action_of_set_vlan_pcp *conf = conf_ptr;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_pcp");
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_uint8(buf, buf_size, nb_chars_total,
+ &conf->vlan_pcp);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static const struct {
+ const char *name;
+ int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const void *conf_ptr);
+} action_table[] = {
+ [RTE_FLOW_ACTION_TYPE_VOID] = {
+ .name = "void"
+ },
+ [RTE_FLOW_ACTION_TYPE_FLAG] = {
+ .name = "flag"
+ },
+ [RTE_FLOW_ACTION_TYPE_DROP] = {
+ .name = "drop"
+ },
+ [RTE_FLOW_ACTION_TYPE_PF] = {
+ .name = "pf"
+ },
+ [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
+ .name = "of_pop_vlan"
+ },
+ [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
+ .name = "vxlan_encap"
+ },
+ [RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
+ .name = "vxlan_decap"
+ },
+
+#define ACTION(_name_uppercase, _name_lowercase) \
+ [RTE_FLOW_ACTION_TYPE_##_name_uppercase] = { \
+ .name = #_name_lowercase, \
+ .parse_cb = rte_flow_snprint_action_##_name_lowercase, \
+ }
+
+ ACTION(JUMP, jump),
+ ACTION(MARK, mark),
+ ACTION(QUEUE, queue),
+ ACTION(COUNT, count),
+ ACTION(RSS, rss),
+ ACTION(VF, vf),
+ ACTION(PHY_PORT, phy_port),
+ ACTION(PORT_ID, port_id),
+ ACTION(OF_PUSH_VLAN, of_push_vlan),
+ ACTION(OF_SET_VLAN_VID, of_set_vlan_vid),
+ ACTION(OF_SET_VLAN_PCP, of_set_vlan_pcp),
+
+#undef ACTION
+};
+
+static int
+rte_flow_snprint_action(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_action *action)
+{
+ int rc;
+
+ if (action->type < 0 || action->type >= RTE_DIM(action_table) ||
+ action_table[action->type].name == NULL) {
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ "{unknown}");
+ if (rc != 0)
+ return rc;
+
+ goto out;
+ }
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
+ action_table[action->type].name);
+ if (rc != 0)
+ return rc;
+
+ if (action_table[action->type].parse_cb != NULL &&
+ action->conf != NULL) {
+ rc = action_table[action->type].parse_cb(buf, buf_size,
+ nb_chars_total,
+ action->conf);
+ if (rc != 0)
+ return rc;
+ }
+
+out:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+rte_flow_snprint_actions(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_action actions[])
+{
+ const struct rte_flow_action *action;
+ int rc;
+
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "actions");
+ if (rc != 0)
+ return rc;
+
+ if (actions == NULL)
+ goto end;
+
+ for (action = actions;
+ action->type != RTE_FLOW_ACTION_TYPE_END; ++action) {
+ rc = rte_flow_snprint_action(buf, buf_size, nb_chars_total,
+ action);
+ if (rc != 0)
+ return rc;
+ }
+
+end:
+ rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
+
+int
+rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[])
+{
+ int rc;
+
+ if (buf == NULL && buf_size != 0)
+ return -EINVAL;
+
+ *nb_chars_total = 0;
+
+ rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total, pattern);
+ if (rc != 0)
+ return rc;
+
+ rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total, actions);
+ if (rc != 0)
+ return rc;
+
+ return 0;
+}
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 44d30b05ae..a626cac944 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -249,6 +249,9 @@ EXPERIMENTAL {
rte_mtr_meter_policy_delete;
rte_mtr_meter_policy_update;
rte_mtr_meter_policy_validate;
+
+ # added in 21.08
+ rte_flow_snprint;
};
INTERNAL {
--
2.20.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-27 8:25 [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping Ivan Malov
@ 2021-05-30 7:27 ` Ori Kam
2021-05-31 2:28 ` Stephen Hemminger
` (2 more replies)
2021-06-14 12:42 ` Singh, Aman Deep
1 sibling, 3 replies; 16+ messages in thread
From: Ori Kam @ 2021-05-30 7:27 UTC (permalink / raw)
To: Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Ray Kinsella, Neil Horman
Hi Ivan,
First nice idea and thanks for the picking up the ball.
Before a detail review,
The main thing I'm concerned about is that this print will be partially supported,
I know that you covered this issue by printing unknown for unsupported item/actions,
but this will mean that it is enough that one item/action is not supported and already the
flow can't be used in testpmd.
To get full support it means that the developer needs to add such print with each new
item/action. I agree it is possible, but it has high overhead for each feature.
Maybe we should somehow create a macros for the prints or other easier to support ways.
For example, just printing the ipv4 has 7 function calls inside of it each one with error checking,
and I'm not counting the dedicated functions.
Best,
Ori
> -----Original Message-----
> From: Ivan Malov <ivan.malov@oktetlabs.ru>
> Sent: Thursday, May 27, 2021 11:25 AM
> To: dev@dpdk.org
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow rule
> dumping
>
> DPDK applications (for example, OvS) or tests which use RTE flow API need to
> log created or rejected flow rules to help to recognise what goes right or
> wrong. From this standpoint, testpmd-compliant format is nice for the
> purpose because it allows to copy-paste the flow rules and debug using
> testpmd.
>
> Recognisable pattern items:
> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP, VXLAN,
> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>
> Recognisable actions:
> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>
> Recognisable RSS types (action RSS):
> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
>
> Unrecognised parts of the flow specification are represented by tokens
> "{unknown}" and "{unknown bits}". Interested parties are welcome to
> extend this tool to recognise more items and actions.
>
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> ---
> lib/ethdev/meson.build | 1 +
> lib/ethdev/rte_flow.h | 33 +
> lib/ethdev/rte_flow_snprint.c | 1681
> +++++++++++++++++++++++++++++++++
> lib/ethdev/version.map | 3 +
> 4 files changed, 1718 insertions(+)
> create mode 100644 lib/ethdev/rte_flow_snprint.c
>
> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
> 0205c853df..97bba4fa1b 100644
> --- a/lib/ethdev/meson.build
> +++ b/lib/ethdev/meson.build
> @@ -8,6 +8,7 @@ sources = files(
> 'rte_class_eth.c',
> 'rte_ethdev.c',
> 'rte_flow.c',
> + 'rte_flow_snprint.c',
> 'rte_mtr.c',
> 'rte_tm.c',
> )
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> 961a5884fe..cd5e9ef631 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> struct rte_flow_item *items,
> uint32_t num_of_items,
> struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Dump testpmd-compliant textual representation of the flow rule.
> + * Invoke this with zero-size buffer to learn the string size and
> + * invoke this for the second time to actually dump the flow rule.
> + * The buffer size on the second invocation = the string size + 1.
> + *
> + * @param[out] buf
> + * Buffer to save the dump in, or NULL
> + * @param buf_size
> + * Buffer size, or 0
> + * @param[out] nb_chars_total
> + * Resulting string size (excluding the terminating null byte)
> + * @param[in] attr
> + * Flow rule attributes.
> + * @param[in] pattern
> + * Pattern specification (list terminated by the END pattern item).
> + * @param[in] actions
> + * Associated actions (list terminated by the END action).
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise
> + */
> +__rte_experimental
> +int
> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[]);
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/ethdev/rte_flow_snprint.c b/lib/ethdev/rte_flow_snprint.c
> new file mode 100644 index 0000000000..513886528b
> --- /dev/null
> +++ b/lib/ethdev/rte_flow_snprint.c
> @@ -0,0 +1,1681 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + *
> + * Copyright(c) 2021 Xilinx, Inc.
> + */
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <string.h>
> +
> +#include <rte_common.h>
> +#include "rte_ethdev.h"
> +#include "rte_flow.h"
> +
> +static int
> +rte_flow_snprint_str(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + const char *str = value_ptr;
> + size_t write_size_max;
> + int retv;
> +
> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> + retv = snprintf(buf + *nb_chars_total, write_size_max, " %s", str);
> + if (retv < 0)
> + return -EFAULT;
> +
> + *nb_chars_total += retv;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_ether_addr(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *value_ptr)
> +{
> + const struct rte_ether_addr *ea = value_ptr;
> + const uint8_t *ab = ea->addr_bytes;
> + size_t write_size_max;
> + int retv;
> +
> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> + retv = snprintf(buf + *nb_chars_total, write_size_max,
> + " %02x:%02x:%02x:%02x:%02x:%02x",
> + ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
> + if (retv < 0)
> + return -EFAULT;
> +
> + *nb_chars_total += retv;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_ipv4_addr(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *value_ptr)
> +{
> + char addr_str[INET_ADDRSTRLEN];
> +
> + if (inet_ntop(AF_INET, value_ptr, addr_str, sizeof(addr_str)) ==
> NULL)
> + return -EFAULT;
> +
> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> +}
> +
> +static int
> +rte_flow_snprint_ipv6_addr(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *value_ptr)
> +{
> + char addr_str[INET6_ADDRSTRLEN];
> +
> + if (inet_ntop(AF_INET6, value_ptr, addr_str, sizeof(addr_str)) ==
> NULL)
> + return -EFAULT;
> +
> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> +}
> +
> +#define SNPRINT(_type, _fmt) \
> + do { \
> + const _type *vp = value_ptr; \
> + size_t write_size_max; \
> + int retv; \
> + \
> + write_size_max = buf_size - \
> + RTE_MIN(buf_size, *nb_chars_total); \
> + retv = snprintf(buf + *nb_chars_total, write_size_max,
> \
> + _fmt, *vp); \
> + if (retv < 0) \
> + return -EFAULT;
> \
> + \
> + *nb_chars_total += retv; \
> + \
> + return 0; \
> + } while (0)
> +
> +static int
> +rte_flow_snprint_uint32(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint32_t, " %u");
> +}
> +
> +static int
> +rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint32_t, " 0x%08x");
> +}
> +
> +static int
> +rte_flow_snprint_hex24(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint32_t, " 0x%06x");
> +}
> +
> +static int
> +rte_flow_snprint_hex20(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint32_t, " 0x%05x");
> +}
> +
> +static int
> +rte_flow_snprint_uint16(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint16_t, " %hu");
> +}
> +
> +static int
> +rte_flow_snprint_uint16_be2cpu(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void *value_ptr) {
> + const uint16_t *valuep = value_ptr;
> + uint16_t value = rte_be_to_cpu_16(*valuep);
> +
> + value_ptr = &value;
> +
> + SNPRINT(uint16_t, " %hu");
> +}
> +
> +static int
> +rte_flow_snprint_hex16_be2cpu(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void *value_ptr) {
> + const uint16_t *valuep = value_ptr;
> + uint16_t value = rte_be_to_cpu_16(*valuep);
> +
> + value_ptr = &value;
> +
> + SNPRINT(uint16_t, " 0x%04x");
> +}
> +
> +static int
> +rte_flow_snprint_uint8(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint8_t, " %hhu");
> +}
> +
> +static int
> +rte_flow_snprint_hex8(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint8_t, " 0x%02x");
> +}
> +
> +static int
> +rte_flow_snprint_byte(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint8_t, "%02x");
> +}
> +
> +#undef SNPRINT
> +
> +static int
> +rte_flow_snprint_attr(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_attr *attr) {
> + int rc;
> +
> + if (attr == NULL)
> + return 0;
> +
> + if (attr->group != 0) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "group");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> + &attr->group);
> + if (rc != 0)
> + return rc;
> + }
> +
> + if (attr->priority != 0) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "priority");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> + &attr->priority);
> + if (rc != 0)
> + return rc;
> + }
> +
> + if (attr->transfer) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "transfer");
> + if (rc != 0)
> + return rc;
> + }
> +
> + if (attr->ingress) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "ingress");
> + if (rc != 0)
> + return rc;
> + }
> +
> + if (attr->egress) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "egress");
> + if (rc != 0)
> + return rc;
> + }
> +
> + return 0;
> +}
> +
> +static void
> +rte_flow_item_init_parse(const struct rte_flow_item *item, size_t
> item_size,
> + void *spec, void *last, void *mask) {
> + if (item->spec != NULL)
> + memcpy(spec, item->spec, item_size);
> + else
> + memset(spec, 0, item_size);
> +
> + if (item->last != NULL)
> + memcpy(last, item->last, item_size);
> + else
> + memset(last, 0, item_size);
> +
> + if (item->mask != NULL)
> + memcpy(mask, item->mask, item_size);
> + else
> + memset(mask, 0, item_size);
> +}
> +
> +static bool
> +rte_flow_buf_is_all_zeros(const void *buf_ptr, size_t buf_size) {
> + const uint8_t *buf = buf_ptr;
> + unsigned int i;
> + uint8_t t = 0;
> +
> + for (i = 0; i < buf_size; ++i)
> + t |= buf[i];
> +
> + return (t == 0);
> +}
> +
> +static bool
> +rte_flow_buf_is_all_ones(const void *buf_ptr, size_t buf_size) {
> + const uint8_t *buf = buf_ptr;
> + unsigned int i;
> + uint8_t t = ~0;
> +
> + for (i = 0; i < buf_size; ++i)
> + t &= buf[i];
> +
> + return (t == (uint8_t)(~0));
> +}
> +
> +static int
> +rte_flow_snprint_item_field(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + int (*value_dump_cb)(char *, size_t, size_t *,
> + const void *),
> + int (*mask_dump_cb)(char *, size_t, size_t *,
> + const void *),
> + const char *field_name, size_t field_size,
> + void *field_spec, void *field_last,
> + void *field_mask, void *field_full_mask) {
> + bool mask_is_all_ones;
> + bool last_is_futile;
> + int rc;
> +
> + if (rte_flow_buf_is_all_zeros(field_mask, field_size))
> + return 0;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> field_name);
> + if (rc != 0)
> + return rc;
> +
> + if (field_full_mask != NULL) {
> + mask_is_all_ones = (memcmp(field_mask, field_full_mask,
> + field_size) == 0);
> + } else {
> + mask_is_all_ones = rte_flow_buf_is_all_ones(field_mask,
> + field_size);
> + }
> + last_is_futile = rte_flow_buf_is_all_zeros(field_last, field_size) ||
> + (memcmp(field_spec, field_last, field_size) == 0);
> +
> + if (mask_is_all_ones && last_is_futile) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "is");
> + if (rc != 0)
> + return rc;
> +
> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
> field_spec);
> + if (rc != 0)
> + return rc;
> +
> + goto done;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "spec");
> + if (rc != 0)
> + return rc;
> +
> + rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
> + if (rc != 0)
> + return rc;
> +
> + if (!last_is_futile) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + field_name);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "last");
> + if (rc != 0)
> + return rc;
> +
> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
> field_last);
> + if (rc != 0)
> + return rc;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> field_name);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "mask");
> + if (rc != 0)
> + return rc;
> +
> + rc = mask_dump_cb(buf, buf_size, nb_chars_total, field_mask);
> + if (rc != 0)
> + return rc;
> +
> +done:
> + /*
> + * Zeroise the printed field. When all item fields have been printed,
> + * the corresponding item handler will make sure that the whole item
> + * mask is all-zeros. This is needed to highlight unsupported fields.
> + *
> + * If the provided field mask pointer refers to a separate container
> + * rather than to the field in the item mask directly, it's the duty
> + * of the item handler to clear the field in the item mask correctly.
> + */
> + memset(field_mask, 0, field_size);
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_vf(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_vf *spec = spec_ptr;
> + struct rte_flow_item_vf *last = last_ptr;
> + struct rte_flow_item_vf *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex32, "id",
> + sizeof(spec->id), &spec->id, &last-
> >id,
> + &mask->id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_phy_port(char *buf, size_t buf_size,
> + size_t *nb_chars_total, void *spec_ptr,
> + void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_phy_port *spec = spec_ptr;
> + struct rte_flow_item_phy_port *last = last_ptr;
> + struct rte_flow_item_phy_port *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex32, "index",
> + sizeof(spec->index), &spec->index,
> + &last->index, &mask->index, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_port_id(char *buf, size_t buf_size,
> + size_t *nb_chars_total, void *spec_ptr,
> + void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_port_id *spec = spec_ptr;
> + struct rte_flow_item_port_id *last = last_ptr;
> + struct rte_flow_item_port_id *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex32, "id",
> + sizeof(spec->id), &spec->id, &last-
> >id,
> + &mask->id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_eth(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_eth *spec = spec_ptr;
> + struct rte_flow_item_eth *last = last_ptr;
> + struct rte_flow_item_eth *mask = mask_ptr;
> + uint8_t has_vlan_full_mask = 1;
> + uint8_t has_vlan_spec;
> + uint8_t has_vlan_last;
> + uint8_t has_vlan_mask;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ether_addr,
> + rte_flow_snprint_ether_addr, "dst",
> + sizeof(spec->hdr.d_addr),
> + &spec->hdr.d_addr, &last-
> >hdr.d_addr,
> + &mask->hdr.d_addr, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ether_addr,
> + rte_flow_snprint_ether_addr, "src",
> + sizeof(spec->hdr.s_addr),
> + &spec->hdr.s_addr, &last-
> >hdr.s_addr,
> + &mask->hdr.s_addr, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "type",
> + sizeof(spec->hdr.ether_type),
> + &spec->hdr.ether_type,
> + &last->hdr.ether_type,
> + &mask->hdr.ether_type, NULL);
> + if (rc != 0)
> + return rc;
> +
> + has_vlan_spec = spec->has_vlan;
> + has_vlan_last = last->has_vlan;
> + has_vlan_mask = mask->has_vlan;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_uint8, "has_vlan",
> + sizeof(has_vlan_spec),
> &has_vlan_spec,
> + &has_vlan_last, &has_vlan_mask,
> + &has_vlan_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + mask->has_vlan = 0;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_vlan(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_vlan *spec = spec_ptr;
> + struct rte_flow_item_vlan *last = last_ptr;
> + struct rte_flow_item_vlan *mask = mask_ptr;
> + uint8_t has_more_vlan_full_mask = 1;
> + uint8_t has_more_vlan_spec;
> + uint8_t has_more_vlan_last;
> + uint8_t has_more_vlan_mask;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "tci",
> + sizeof(spec->hdr.vlan_tci),
> + &spec->hdr.vlan_tci,
> + &last->hdr.vlan_tci,
> + &mask->hdr.vlan_tci, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> + "inner_type",
> + sizeof(spec->hdr.eth_proto),
> + &spec->hdr.eth_proto,
> + &last->hdr.eth_proto,
> + &mask->hdr.eth_proto, NULL);
> + if (rc != 0)
> + return rc;
> +
> + has_more_vlan_spec = spec->has_more_vlan;
> + has_more_vlan_last = last->has_more_vlan;
> + has_more_vlan_mask = mask->has_more_vlan;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_uint8,
> + "has_more_vlan",
> + sizeof(has_more_vlan_spec),
> + &has_more_vlan_spec,
> + &has_more_vlan_last,
> + &has_more_vlan_mask,
> + &has_more_vlan_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + mask->has_more_vlan = 0;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_ipv4(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_ipv4 *spec = spec_ptr;
> + struct rte_flow_item_ipv4 *last = last_ptr;
> + struct rte_flow_item_ipv4 *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex8,
> + rte_flow_snprint_hex8, "tos",
> + sizeof(spec->hdr.type_of_service),
> + &spec->hdr.type_of_service,
> + &last->hdr.type_of_service,
> + &mask->hdr.type_of_service,
> NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> + "packet_id",
> + sizeof(spec->hdr.packet_id),
> + &spec->hdr.packet_id,
> + &last->hdr.packet_id,
> + &mask->hdr.packet_id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> + "fragment_offset",
> + sizeof(spec->hdr.fragment_offset),
> + &spec->hdr.fragment_offset,
> + &last->hdr.fragment_offset,
> + &mask->hdr.fragment_offset,
> NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_hex8, "ttl",
> + sizeof(spec->hdr.time_to_live),
> + &spec->hdr.time_to_live,
> + &last->hdr.time_to_live,
> + &mask->hdr.time_to_live, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_hex8, "proto",
> + sizeof(spec->hdr.next_proto_id),
> + &spec->hdr.next_proto_id,
> + &last->hdr.next_proto_id,
> + &mask->hdr.next_proto_id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ipv4_addr,
> + rte_flow_snprint_ipv4_addr, "src",
> + sizeof(spec->hdr.src_addr),
> + &spec->hdr.src_addr,
> + &last->hdr.src_addr,
> + &mask->hdr.src_addr, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ipv4_addr,
> + rte_flow_snprint_ipv4_addr, "dst",
> + sizeof(spec->hdr.dst_addr),
> + &spec->hdr.dst_addr,
> + &last->hdr.dst_addr,
> + &mask->hdr.dst_addr, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_ipv6(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + uint32_t tc_full_mask = (RTE_IPV6_HDR_TC_MASK >>
> RTE_IPV6_HDR_TC_SHIFT);
> + uint32_t fl_full_mask = (RTE_IPV6_HDR_FL_MASK >>
> RTE_IPV6_HDR_FL_SHIFT);
> + struct rte_flow_item_ipv6 *spec = spec_ptr;
> + struct rte_flow_item_ipv6 *last = last_ptr;
> + struct rte_flow_item_ipv6 *mask = mask_ptr;
> + uint8_t has_frag_ext_full_mask = 1;
> + uint8_t has_frag_ext_spec;
> + uint8_t has_frag_ext_last;
> + uint8_t has_frag_ext_mask;
> + uint32_t vtc_flow;
> + uint32_t fl_spec;
> + uint32_t fl_last;
> + uint32_t fl_mask;
> + uint32_t tc_spec;
> + uint32_t tc_last;
> + uint32_t tc_mask;
> + int rc;
> +
> + vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
> + tc_spec = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> RTE_IPV6_HDR_TC_SHIFT;
> + fl_spec = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> RTE_IPV6_HDR_FL_SHIFT;
> +
> + vtc_flow = rte_be_to_cpu_32(last->hdr.vtc_flow);
> + tc_last = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> RTE_IPV6_HDR_TC_SHIFT;
> + fl_last = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> RTE_IPV6_HDR_FL_SHIFT;
> +
> + vtc_flow = rte_be_to_cpu_32(mask->hdr.vtc_flow);
> + tc_mask = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> RTE_IPV6_HDR_TC_SHIFT;
> + fl_mask = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> RTE_IPV6_HDR_FL_SHIFT;
> +
> + mask->hdr.vtc_flow &=
> ~rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK |
> + RTE_IPV6_HDR_FL_MASK);
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex8,
> + rte_flow_snprint_hex8, "tc",
> + sizeof(tc_spec), &tc_spec, &tc_last,
> + &tc_mask, &tc_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex20, "flow",
> + sizeof(fl_spec), &fl_spec, &fl_last,
> + &fl_mask, &fl_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_hex8, "proto",
> + sizeof(spec->hdr.proto),
> + &spec->hdr.proto,
> + &last->hdr.proto,
> + &mask->hdr.proto, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_hex8, "hop",
> + sizeof(spec->hdr.hop_limits),
> + &spec->hdr.hop_limits,
> + &last->hdr.hop_limits,
> + &mask->hdr.hop_limits, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ipv6_addr,
> + rte_flow_snprint_ipv6_addr, "src",
> + sizeof(spec->hdr.src_addr),
> + &spec->hdr.src_addr,
> + &last->hdr.src_addr,
> + &mask->hdr.src_addr, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_ipv6_addr,
> + rte_flow_snprint_ipv6_addr, "dst",
> + sizeof(spec->hdr.dst_addr),
> + &spec->hdr.dst_addr,
> + &last->hdr.dst_addr,
> + &mask->hdr.dst_addr, NULL);
> +
> + has_frag_ext_spec = spec->has_frag_ext;
> + has_frag_ext_last = last->has_frag_ext;
> + has_frag_ext_mask = mask->has_frag_ext;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint8,
> + rte_flow_snprint_uint8,
> "has_frag_ext",
> + sizeof(has_frag_ext_spec),
> + &has_frag_ext_spec,
> &has_frag_ext_last,
> + &has_frag_ext_mask,
> + &has_frag_ext_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + mask->has_frag_ext = 0;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_udp(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_udp *spec = spec_ptr;
> + struct rte_flow_item_udp *last = last_ptr;
> + struct rte_flow_item_udp *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "src",
> + sizeof(spec->hdr.src_port),
> + &spec->hdr.src_port,
> + &last->hdr.src_port,
> + &mask->hdr.src_port, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "dst",
> + sizeof(spec->hdr.dst_port),
> + &spec->hdr.dst_port,
> + &last->hdr.dst_port,
> + &mask->hdr.dst_port, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_tcp(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_tcp *spec = spec_ptr;
> + struct rte_flow_item_tcp *last = last_ptr;
> + struct rte_flow_item_tcp *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "src",
> + sizeof(spec->hdr.src_port),
> + &spec->hdr.src_port,
> + &last->hdr.src_port,
> + &mask->hdr.src_port, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> "dst",
> + sizeof(spec->hdr.dst_port),
> + &spec->hdr.dst_port,
> + &last->hdr.dst_port,
> + &mask->hdr.dst_port, NULL);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex8,
> + rte_flow_snprint_hex8, "flags",
> + sizeof(spec->hdr.tcp_flags),
> + &spec->hdr.tcp_flags,
> + &last->hdr.tcp_flags,
> + &mask->hdr.tcp_flags, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_vxlan(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_vxlan *spec = spec_ptr;
> + struct rte_flow_item_vxlan *last = last_ptr;
> + struct rte_flow_item_vxlan *mask = mask_ptr;
> + uint32_t vni_full_mask = 0xffffff;
> + uint32_t vni_spec;
> + uint32_t vni_last;
> + uint32_t vni_mask;
> + int rc;
> +
> + vni_spec = rte_be_to_cpu_32(spec->hdr.vx_vni) >> 8;
> + vni_last = rte_be_to_cpu_32(last->hdr.vx_vni) >> 8;
> + vni_mask = rte_be_to_cpu_32(mask->hdr.vx_vni) >> 8;
> +
> + mask->hdr.vx_vni &= ~RTE_BE32(0xffffff00);
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex24, "vni",
> + sizeof(vni_spec), &vni_spec,
> + &vni_last, &vni_mask,
> + &vni_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_nvgre(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_nvgre *spec = spec_ptr;
> + struct rte_flow_item_nvgre *last = last_ptr;
> + struct rte_flow_item_nvgre *mask = mask_ptr;
> + uint32_t *tni_and_flow_id_specp = (uint32_t *)spec->tni;
> + uint32_t *tni_and_flow_id_lastp = (uint32_t *)last->tni;
> + uint32_t *tni_and_flow_id_maskp = (uint32_t *)mask->tni;
> + uint32_t tni_full_mask = 0xffffff;
> + uint32_t tni_spec;
> + uint32_t tni_last;
> + uint32_t tni_mask;
> + int rc;
> +
> + tni_spec = rte_be_to_cpu_32(*tni_and_flow_id_specp) >> 8;
> + tni_last = rte_be_to_cpu_32(*tni_and_flow_id_lastp) >> 8;
> + tni_mask = rte_be_to_cpu_32(*tni_and_flow_id_maskp) >> 8;
> +
> + memset(mask->tni, 0, sizeof(mask->tni));
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex24, "tni",
> + sizeof(tni_spec), &tni_spec,
> + &tni_last, &tni_mask,
> + &tni_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_geneve(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_geneve *spec = spec_ptr;
> + struct rte_flow_item_geneve *last = last_ptr;
> + struct rte_flow_item_geneve *mask = mask_ptr;
> + uint32_t *vni_and_rsvd_specp = (uint32_t *)spec->vni;
> + uint32_t *vni_and_rsvd_lastp = (uint32_t *)last->vni;
> + uint32_t *vni_and_rsvd_maskp = (uint32_t *)mask->vni;
> + uint32_t vni_full_mask = 0xffffff;
> + uint16_t optlen_full_mask = 0x3f;
> + uint16_t optlen_spec;
> + uint16_t optlen_last;
> + uint16_t optlen_mask;
> + uint32_t vni_spec;
> + uint32_t vni_last;
> + uint32_t vni_mask;
> + int rc;
> +
> + optlen_spec = rte_be_to_cpu_16(spec->ver_opt_len_o_c_rsvd0) &
> 0x3f00;
> + optlen_spec >>= 8;
> +
> + optlen_last = rte_be_to_cpu_16(last->ver_opt_len_o_c_rsvd0) &
> 0x3f00;
> + optlen_last >>= 8;
> +
> + optlen_mask = rte_be_to_cpu_16(mask->ver_opt_len_o_c_rsvd0)
> & 0x3f00;
> + optlen_mask >>= 8;
> +
> + mask->ver_opt_len_o_c_rsvd0 &= ~RTE_BE16(0x3f00);
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16,
> + rte_flow_snprint_hex8, "optlen",
> + sizeof(optlen_spec), &optlen_spec,
> + &optlen_last, &optlen_mask,
> + &optlen_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_hex16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> + "protocol", sizeof(spec->protocol),
> + &spec->protocol, &last->protocol,
> + &mask->protocol, NULL);
> + if (rc != 0)
> + return rc;
> +
> + vni_spec = rte_be_to_cpu_32(*vni_and_rsvd_specp) >> 8;
> + vni_last = rte_be_to_cpu_32(*vni_and_rsvd_lastp) >> 8;
> + vni_mask = rte_be_to_cpu_32(*vni_and_rsvd_maskp) >> 8;
> +
> + memset(mask->vni, 0, sizeof(mask->vni));
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex24, "vni",
> + sizeof(vni_spec), &vni_spec,
> + &vni_last, &vni_mask,
> + &vni_full_mask);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_mark(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_mark *spec = spec_ptr;
> + struct rte_flow_item_mark *last = last_ptr;
> + struct rte_flow_item_mark *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint32,
> + rte_flow_snprint_hex32, "id",
> + sizeof(spec->id), &spec->id,
> + &last->id, &mask->id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_item_pppoed(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> + struct rte_flow_item_pppoe *spec = spec_ptr;
> + struct rte_flow_item_pppoe *last = last_ptr;
> + struct rte_flow_item_pppoe *mask = mask_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> + rte_flow_snprint_uint16_be2cpu,
> + rte_flow_snprint_hex16_be2cpu,
> + "seid", sizeof(spec->session_id),
> + &spec->session_id, &last-
> >session_id,
> + &mask->session_id, NULL);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static const struct {
> + const char *name;
> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_char_total,
> + void *spec_ptr, void *last_ptr, void *mask_ptr);
> + size_t size;
> +} item_table[] = {
> + [RTE_FLOW_ITEM_TYPE_VOID] = {
> + .name = "void"
> + },
> + [RTE_FLOW_ITEM_TYPE_PF] = {
> + .name = "pf"
> + },
> + [RTE_FLOW_ITEM_TYPE_PPPOES] = {
> + .name = "pppoes"
> + },
> + [RTE_FLOW_ITEM_TYPE_PPPOED] = {
> + .name = "pppoed",
> + .parse_cb = rte_flow_snprint_item_pppoed,
> + .size = sizeof(struct rte_flow_item_pppoe)
> + },
> +
> +#define ITEM(_name_uppercase, _name_lowercase) \
> + [RTE_FLOW_ITEM_TYPE_##_name_uppercase] = {
> \
> + .name = #_name_lowercase, \
> + .parse_cb = rte_flow_snprint_item_##_name_lowercase,
> \
> + .size = sizeof(struct rte_flow_item_##_name_lowercase)
> \
> + }
> +
> + ITEM(VF, vf),
> + ITEM(PHY_PORT, phy_port),
> + ITEM(PORT_ID, port_id),
> + ITEM(ETH, eth),
> + ITEM(VLAN, vlan),
> + ITEM(IPV4, ipv4),
> + ITEM(IPV6, ipv6),
> + ITEM(UDP, udp),
> + ITEM(TCP, tcp),
> + ITEM(VXLAN, vxlan),
> + ITEM(NVGRE, nvgre),
> + ITEM(GENEVE, geneve),
> + ITEM(MARK, mark),
> +
> +#undef ITEM
> +};
> +
> +static int
> +rte_flow_snprint_item(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_item *item) {
> + int rc;
> +
> + if (item->type < 0 || item->type >= RTE_DIM(item_table) ||
> + item_table[item->type].name == NULL) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "{unknown}");
> + if (rc != 0)
> + return rc;
> +
> + goto out;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + item_table[item->type].name);
> + if (rc != 0)
> + return rc;
> +
> + if (item_table[item->type].parse_cb != NULL) {
> + size_t item_size = item_table[item->type].size;
> + uint8_t spec[item_size];
> + uint8_t last[item_size];
> + uint8_t mask[item_size];
> +
> + rte_flow_item_init_parse(item, item_size, spec, last, mask);
> +
> + rc = item_table[item->type].parse_cb(buf, buf_size,
> + nb_chars_total,
> + spec, last, mask);
> + if (rc != 0)
> + return rc;
> +
> + if (!rte_flow_buf_is_all_zeros(mask, item_size)) {
> + rc = rte_flow_snprint_str(buf, buf_size,
> + nb_chars_total,
> + "{unknown bits}");
> + if (rc != 0)
> + return rc;
> + }
> + }
> +
> +out:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_pattern(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const struct rte_flow_item pattern[]) {
> + const struct rte_flow_item *item;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "pattern");
> + if (rc != 0)
> + return rc;
> +
> + if (pattern == NULL)
> + goto end;
> +
> + for (item = pattern;
> + item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
> + rc = rte_flow_snprint_item(buf, buf_size, nb_chars_total,
> item);
> + if (rc != 0)
> + return rc;
> + }
> +
> +end:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_jump(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_jump *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "group");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> + &conf->group);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_mark(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_mark *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >id);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_queue(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void *conf_ptr) {
> + const struct rte_flow_action_queue *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> + &conf->index);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_count(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void *conf_ptr) {
> + const struct rte_flow_action_count *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "identifier");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >id);
> + if (rc != 0)
> + return rc;
> +
> + if (conf->shared) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "shared");
> + if (rc != 0)
> + return rc;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_rss_func(char *buf, size_t buf_size,
> + size_t *nb_chars_total,
> + enum rte_eth_hash_function func)
> +{
> + int rc;
> +
> + if (func == RTE_ETH_HASH_FUNCTION_DEFAULT)
> + return 0;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "func");
> + if (rc != 0)
> + return rc;
> +
> + switch (func) {
> + case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "toeplitz");
> + break;
> + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "simple_xor");
> + break;
> + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "symmetric_toeplitz");
> + break;
> + default:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "{unknown}");
> + break;
> + }
> +
> + return rc;
> +}
> +
> +static int
> +rte_flow_snprint_action_rss_level(char *buf, size_t buf_size,
> + size_t *nb_chars_total, uint32_t level) {
> + int rc;
> +
> + if (level == 0)
> + return 0;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "level");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &level);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static const struct {
> + const char *name;
> + uint64_t flag;
> +} rss_type_table[] = {
> + { "ipv4", ETH_RSS_IPV4 },
> + { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
> + { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
> + { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
> + { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
> + { "ipv6", ETH_RSS_IPV6 },
> + { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
> + { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
> + { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
> + { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
> + { "ipv6-ex", ETH_RSS_IPV6_EX },
> + { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
> + { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
> +
> + { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
> + { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
> + { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
> + { "l4-dst-only", ETH_RSS_L4_DST_ONLY }, };
> +
> +static int
> +rte_flow_snprint_action_rss_types(char *buf, size_t buf_size,
> + size_t *nb_chars_total, uint64_t types) {
> + unsigned int i;
> + int rc;
> +
> + if (types == 0)
> + return 0;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "types");
> + if (rc != 0)
> + return rc;
> +
> + for (i = 0; i < RTE_DIM(rss_type_table); ++i) {
> + uint64_t flag = rss_type_table[i].flag;
> +
> + if ((types & flag) == 0)
> + continue;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + rss_type_table[i].name);
> + if (rc != 0)
> + return rc;
> +
> + types &= ~flag;
> + }
> +
> + if (types != 0) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "{unknown}");
> + if (rc != 0)
> + return rc;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_rss_queues(char *buf, size_t buf_size,
> + size_t *nb_chars_total,
> + const uint16_t *queues,
> + unsigned int nb_queues)
> +{
> + unsigned int i;
> + int rc;
> +
> + if (nb_queues == 0)
> + return 0;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "queues");
> + if (rc != 0)
> + return rc;
> +
> + for (i = 0; i < nb_queues; ++i) {
> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> + &queues[i]);
> + if (rc != 0)
> + return rc;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_rss(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_rss *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_action_rss_func(buf, buf_size, nb_chars_total,
> + conf->func);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_action_rss_level(buf, buf_size,
> nb_chars_total,
> + conf->level);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_action_rss_types(buf, buf_size,
> nb_chars_total,
> + conf->types);
> + if (rc != 0)
> + return rc;
> +
> + if (conf->key_len != 0) {
> + if (conf->key != NULL) {
> + unsigned int i;
> +
> + rc = rte_flow_snprint_str(buf, buf_size,
> nb_chars_total,
> + "" /* results in space */);
> + if (rc != 0)
> + return rc;
> +
> + for (i = 0; i < conf->key_len; ++i) {
> + rc = rte_flow_snprint_byte(buf, buf_size,
> + nb_chars_total,
> + &conf->key[i]);
> + if (rc != 0)
> + return rc;
> + }
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "key_len");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> + &conf->key_len);
> + if (rc != 0)
> + return rc;
> + }
> +
> + rc = rte_flow_snprint_action_rss_queues(buf, buf_size,
> nb_chars_total,
> + conf->queue, conf-
> >queue_num);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_vf(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_vf *conf = conf_ptr;
> + int rc;
> +
> + if (conf->original) {
> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "original on");
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >id);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_phy_port(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void
> *conf_ptr) {
> + const struct rte_flow_action_phy_port *conf = conf_ptr;
> + int rc;
> +
> + if (conf->original) {
> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "original on");
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> + &conf->index);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_port_id(char *buf, size_t buf_size,
> + size_t *nb_chars_total, const void *conf_ptr)
> {
> + const struct rte_flow_action_port_id *conf = conf_ptr;
> + int rc;
> +
> + if (conf->original) {
> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "original on");
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >id);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_of_push_vlan(char *buf, size_t buf_size,
> + size_t *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_of_push_vlan *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> "ethertype");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_hex16_be2cpu(buf, buf_size, nb_chars_total,
> + &conf->ethertype);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_of_set_vlan_vid(char *buf, size_t buf_size,
> + size_t *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_of_set_vlan_vid *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_vid");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint16_be2cpu(buf, buf_size, nb_chars_total,
> + &conf->vlan_vid);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_action_of_set_vlan_pcp(char *buf, size_t buf_size,
> + size_t *nb_chars_total,
> + const void *conf_ptr)
> +{
> + const struct rte_flow_action_of_set_vlan_pcp *conf = conf_ptr;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_pcp");
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_uint8(buf, buf_size, nb_chars_total,
> + &conf->vlan_pcp);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static const struct {
> + const char *name;
> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *conf_ptr);
> +} action_table[] = {
> + [RTE_FLOW_ACTION_TYPE_VOID] = {
> + .name = "void"
> + },
> + [RTE_FLOW_ACTION_TYPE_FLAG] = {
> + .name = "flag"
> + },
> + [RTE_FLOW_ACTION_TYPE_DROP] = {
> + .name = "drop"
> + },
> + [RTE_FLOW_ACTION_TYPE_PF] = {
> + .name = "pf"
> + },
> + [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
> + .name = "of_pop_vlan"
> + },
> + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
> + .name = "vxlan_encap"
> + },
> + [RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
> + .name = "vxlan_decap"
> + },
> +
> +#define ACTION(_name_uppercase, _name_lowercase) \
> + [RTE_FLOW_ACTION_TYPE_##_name_uppercase] = {
> \
> + .name = #_name_lowercase, \
> + .parse_cb = rte_flow_snprint_action_##_name_lowercase,
> \
> + }
> +
> + ACTION(JUMP, jump),
> + ACTION(MARK, mark),
> + ACTION(QUEUE, queue),
> + ACTION(COUNT, count),
> + ACTION(RSS, rss),
> + ACTION(VF, vf),
> + ACTION(PHY_PORT, phy_port),
> + ACTION(PORT_ID, port_id),
> + ACTION(OF_PUSH_VLAN, of_push_vlan),
> + ACTION(OF_SET_VLAN_VID, of_set_vlan_vid),
> + ACTION(OF_SET_VLAN_PCP, of_set_vlan_pcp),
> +
> +#undef ACTION
> +};
> +
> +static int
> +rte_flow_snprint_action(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_action *action) {
> + int rc;
> +
> + if (action->type < 0 || action->type >= RTE_DIM(action_table) ||
> + action_table[action->type].name == NULL) {
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + "{unknown}");
> + if (rc != 0)
> + return rc;
> +
> + goto out;
> + }
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> + action_table[action->type].name);
> + if (rc != 0)
> + return rc;
> +
> + if (action_table[action->type].parse_cb != NULL &&
> + action->conf != NULL) {
> + rc = action_table[action->type].parse_cb(buf, buf_size,
> + nb_chars_total,
> + action->conf);
> + if (rc != 0)
> + return rc;
> + }
> +
> +out:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +static int
> +rte_flow_snprint_actions(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> + const struct rte_flow_action actions[]) {
> + const struct rte_flow_action *action;
> + int rc;
> +
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "actions");
> + if (rc != 0)
> + return rc;
> +
> + if (actions == NULL)
> + goto end;
> +
> + for (action = actions;
> + action->type != RTE_FLOW_ACTION_TYPE_END; ++action) {
> + rc = rte_flow_snprint_action(buf, buf_size, nb_chars_total,
> + action);
> + if (rc != 0)
> + return rc;
> + }
> +
> +end:
> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> +
> +int
> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[]) {
> + int rc;
> +
> + if (buf == NULL && buf_size != 0)
> + return -EINVAL;
> +
> + *nb_chars_total = 0;
> +
> + rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total,
> pattern);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total,
> actions);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
> 44d30b05ae..a626cac944 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -249,6 +249,9 @@ EXPERIMENTAL {
> rte_mtr_meter_policy_delete;
> rte_mtr_meter_policy_update;
> rte_mtr_meter_policy_validate;
> +
> + # added in 21.08
> + rte_flow_snprint;
> };
>
> INTERNAL {
> --
> 2.20.1
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-30 7:27 ` Ori Kam
@ 2021-05-31 2:28 ` Stephen Hemminger
2021-06-01 14:17 ` Ivan Malov
2021-06-01 14:08 ` Ivan Malov
2021-06-02 20:48 ` Stephen Hemminger
2 siblings, 1 reply; 16+ messages in thread
From: Stephen Hemminger @ 2021-05-31 2:28 UTC (permalink / raw)
To: Ori Kam
Cc: Ivan Malov, dev, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, Ray Kinsella, Neil Horman
On Sun, 30 May 2021 07:27:32 +0000
Ori Kam <orika@nvidia.com> wrote:
> >
> > DPDK applications (for example, OvS) or tests which use RTE flow API need to
> > log created or rejected flow rules to help to recognise what goes right or
> > wrong. From this standpoint, testpmd-compliant format is nice for the
> > purpose because it allows to copy-paste the flow rules and debug using
> > testpmd.
> >
> > Recognisable pattern items:
> > VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP, VXLAN,
> > NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >
> > Recognisable actions:
> > VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
> > PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> > OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >
> > Recognisable RSS types (action RSS):
> > IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> > NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> > NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
> > IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
> >
> > Unrecognised parts of the flow specification are represented by tokens
> > "{unknown}" and "{unknown bits}". Interested parties are welcome to
> > extend this tool to recognise more items and actions.
> >
> > Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> > ---
> > lib/ethdev/meson.build | 1 +
> > lib/ethdev/rte_flow.h | 33 +
> > lib/ethdev/rte_flow_snprint.c | 1681
> > +++++++++++++++++++++++++++++++++
> > lib/ethdev/version.map | 3 +
> > 4 files changed, 1718 insertions(+)
> > create mode 100644 lib/ethdev/rte_flow_snprint.c
> >
> > diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
> > 0205c853df..97bba4fa1b 100644
> > --- a/lib/ethdev/meson.build
> > +++ b/lib/ethdev/meson.build
> > @@ -8,6 +8,7 @@ sources = files(
> > 'rte_class_eth.c',
> > 'rte_ethdev.c',
> > 'rte_flow.c',
> > + 'rte_flow_snprint.c',
> > 'rte_mtr.c',
> > 'rte_tm.c',
> > )
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > 961a5884fe..cd5e9ef631 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> > struct rte_flow_item *items,
> > uint32_t num_of_items,
> > struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Dump testpmd-compliant textual representation of the flow rule.
> > + * Invoke this with zero-size buffer to learn the string size and
> > + * invoke this for the second time to actually dump the flow rule.
> > + * The buffer size on the second invocation = the string size + 1.
> > + *
> > + * @param[out] buf
> > + * Buffer to save the dump in, or NULL
> > + * @param buf_size
> > + * Buffer size, or 0
> > + * @param[out] nb_chars_total
> > + * Resulting string size (excluding the terminating null byte)
> > + * @param[in] attr
> > + * Flow rule attributes.
> > + * @param[in] pattern
> > + * Pattern specification (list terminated by the END pattern item).
> > + * @param[in] actions
> > + * Associated actions (list terminated by the END action).
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> > + const struct rte_flow_attr *attr,
> > + const struct rte_flow_item pattern[],
> > + const struct rte_flow_action actions[]);
> > +
The code would be clearer and simpler if you adopted the same return value
as snprintf. Then lots of places could be just tail calls and the nb_chars_total
would be unnecessary.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-30 7:27 ` Ori Kam
2021-05-31 2:28 ` Stephen Hemminger
@ 2021-06-01 14:08 ` Ivan Malov
2021-06-02 13:32 ` Ori Kam
2021-06-02 20:48 ` Stephen Hemminger
2 siblings, 1 reply; 16+ messages in thread
From: Ivan Malov @ 2021-06-01 14:08 UTC (permalink / raw)
To: Ori Kam, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Ray Kinsella, Neil Horman
Hi Ori,
Your review efforts are much appreciated. I understand your concern
about the partial item/action coverage, but there are some points to be
considered when addressing it:
- It's anyway hardly possible to use the printed flow directly in
testpmd if it contains "opaque", or "PMD-specific", items/actions in
terms of the tunnel offload model. These items/actions have to be
omitted when printing the flow, and their absence in the resulting
string means that copy/pasting the flow to testpmd isn't helpful in this
particular case.
- There's action ENCAP which also can't be fully represented by the tool
in question, simply because it has no parameters. In tespmd, one first
has to issue "set vxlan" command to configure the encap. header, whilst
"vxlan" token in the flow rule string just refers to the previously set
encap. parameters. The suggested flow print helper can't reliably print
these two components ("set vxlan" and the flow rule itself) as they
belong to different testpmd command strings.
As you might see, completeness of the solution wouldn't necessarily be
reachable, even if full item/action coverage was provided.
As for the item/action coverage itself, it's rather controversial. On
the one hand, yes, we should probably try to cover more items and
actions in the suggested patch, to the extent allowed by our current
priorities. But on the other hand, the existing coverage might not be
that poor: it's fairly elaborate and at least allows to print the most
common flow rules.
Yes, macros and some other cunning ways to cover more flow specifics
might come in handy, but, at the same time, can be rather error prone.
Sometimes it's more robust to just write the code out in full.
Thank you.
On 30/05/2021 10:27, Ori Kam wrote:
> Hi Ivan,
>
> First nice idea and thanks for the picking up the ball.
>
> Before a detail review,
> The main thing I'm concerned about is that this print will be partially supported,
> I know that you covered this issue by printing unknown for unsupported item/actions,
> but this will mean that it is enough that one item/action is not supported and already the
> flow can't be used in testpmd.
> To get full support it means that the developer needs to add such print with each new
> item/action. I agree it is possible, but it has high overhead for each feature.
>
> Maybe we should somehow create a macros for the prints or other easier to support ways.
>
> For example, just printing the ipv4 has 7 function calls inside of it each one with error checking,
> and I'm not counting the dedicated functions.
>
>
>
> Best,
> Ori
>
>
>> -----Original Message-----
>> From: Ivan Malov <ivan.malov@oktetlabs.ru>
>> Sent: Thursday, May 27, 2021 11:25 AM
>> To: dev@dpdk.org
>> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
>> <ferruh.yigit@intel.com>; Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
>> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
>> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow rule
>> dumping
>>
>> DPDK applications (for example, OvS) or tests which use RTE flow API need to
>> log created or rejected flow rules to help to recognise what goes right or
>> wrong. From this standpoint, testpmd-compliant format is nice for the
>> purpose because it allows to copy-paste the flow rules and debug using
>> testpmd.
>>
>> Recognisable pattern items:
>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP, VXLAN,
>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>>
>> Recognisable actions:
>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>>
>> Recognisable RSS types (action RSS):
>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
>>
>> Unrecognised parts of the flow specification are represented by tokens
>> "{unknown}" and "{unknown bits}". Interested parties are welcome to
>> extend this tool to recognise more items and actions.
>>
>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
>> ---
>> lib/ethdev/meson.build | 1 +
>> lib/ethdev/rte_flow.h | 33 +
>> lib/ethdev/rte_flow_snprint.c | 1681
>> +++++++++++++++++++++++++++++++++
>> lib/ethdev/version.map | 3 +
>> 4 files changed, 1718 insertions(+)
>> create mode 100644 lib/ethdev/rte_flow_snprint.c
>>
>> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
>> 0205c853df..97bba4fa1b 100644
>> --- a/lib/ethdev/meson.build
>> +++ b/lib/ethdev/meson.build
>> @@ -8,6 +8,7 @@ sources = files(
>> 'rte_class_eth.c',
>> 'rte_ethdev.c',
>> 'rte_flow.c',
>> + 'rte_flow_snprint.c',
>> 'rte_mtr.c',
>> 'rte_tm.c',
>> )
>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
>> 961a5884fe..cd5e9ef631 100644
>> --- a/lib/ethdev/rte_flow.h
>> +++ b/lib/ethdev/rte_flow.h
>> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
>> struct rte_flow_item *items,
>> uint32_t num_of_items,
>> struct rte_flow_error *error);
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice
>> + *
>> + * Dump testpmd-compliant textual representation of the flow rule.
>> + * Invoke this with zero-size buffer to learn the string size and
>> + * invoke this for the second time to actually dump the flow rule.
>> + * The buffer size on the second invocation = the string size + 1.
>> + *
>> + * @param[out] buf
>> + * Buffer to save the dump in, or NULL
>> + * @param buf_size
>> + * Buffer size, or 0
>> + * @param[out] nb_chars_total
>> + * Resulting string size (excluding the terminating null byte)
>> + * @param[in] attr
>> + * Flow rule attributes.
>> + * @param[in] pattern
>> + * Pattern specification (list terminated by the END pattern item).
>> + * @param[in] actions
>> + * Associated actions (list terminated by the END action).
>> + *
>> + * @return
>> + * 0 on success, a negative errno value otherwise
>> + */
>> +__rte_experimental
>> +int
>> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const struct rte_flow_attr *attr,
>> + const struct rte_flow_item pattern[],
>> + const struct rte_flow_action actions[]);
>> +
>> #ifdef __cplusplus
>> }
>> #endif
>> diff --git a/lib/ethdev/rte_flow_snprint.c b/lib/ethdev/rte_flow_snprint.c
>> new file mode 100644 index 0000000000..513886528b
>> --- /dev/null
>> +++ b/lib/ethdev/rte_flow_snprint.c
>> @@ -0,0 +1,1681 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + *
>> + * Copyright(c) 2021 Xilinx, Inc.
>> + */
>> +
>> +#include <stdbool.h>
>> +#include <stdint.h>
>> +#include <string.h>
>> +
>> +#include <rte_common.h>
>> +#include "rte_ethdev.h"
>> +#include "rte_flow.h"
>> +
>> +static int
>> +rte_flow_snprint_str(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + const char *str = value_ptr;
>> + size_t write_size_max;
>> + int retv;
>> +
>> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
>> + retv = snprintf(buf + *nb_chars_total, write_size_max, " %s", str);
>> + if (retv < 0)
>> + return -EFAULT;
>> +
>> + *nb_chars_total += retv;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_ether_addr(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + const struct rte_ether_addr *ea = value_ptr;
>> + const uint8_t *ab = ea->addr_bytes;
>> + size_t write_size_max;
>> + int retv;
>> +
>> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
>> + retv = snprintf(buf + *nb_chars_total, write_size_max,
>> + " %02x:%02x:%02x:%02x:%02x:%02x",
>> + ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
>> + if (retv < 0)
>> + return -EFAULT;
>> +
>> + *nb_chars_total += retv;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_ipv4_addr(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + char addr_str[INET_ADDRSTRLEN];
>> +
>> + if (inet_ntop(AF_INET, value_ptr, addr_str, sizeof(addr_str)) ==
>> NULL)
>> + return -EFAULT;
>> +
>> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
>> +}
>> +
>> +static int
>> +rte_flow_snprint_ipv6_addr(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + char addr_str[INET6_ADDRSTRLEN];
>> +
>> + if (inet_ntop(AF_INET6, value_ptr, addr_str, sizeof(addr_str)) ==
>> NULL)
>> + return -EFAULT;
>> +
>> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
>> +}
>> +
>> +#define SNPRINT(_type, _fmt) \
>> + do { \
>> + const _type *vp = value_ptr; \
>> + size_t write_size_max; \
>> + int retv; \
>> + \
>> + write_size_max = buf_size - \
>> + RTE_MIN(buf_size, *nb_chars_total); \
>> + retv = snprintf(buf + *nb_chars_total, write_size_max,
>> \
>> + _fmt, *vp); \
>> + if (retv < 0) \
>> + return -EFAULT;
>> \
>> + \
>> + *nb_chars_total += retv; \
>> + \
>> + return 0; \
>> + } while (0)
>> +
>> +static int
>> +rte_flow_snprint_uint32(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint32_t, " %u");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint32_t, " 0x%08x");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_hex24(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint32_t, " 0x%06x");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_hex20(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint32_t, " 0x%05x");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_uint16(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint16_t, " %hu");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_uint16_be2cpu(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void *value_ptr) {
>> + const uint16_t *valuep = value_ptr;
>> + uint16_t value = rte_be_to_cpu_16(*valuep);
>> +
>> + value_ptr = &value;
>> +
>> + SNPRINT(uint16_t, " %hu");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_hex16_be2cpu(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void *value_ptr) {
>> + const uint16_t *valuep = value_ptr;
>> + uint16_t value = rte_be_to_cpu_16(*valuep);
>> +
>> + value_ptr = &value;
>> +
>> + SNPRINT(uint16_t, " 0x%04x");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_uint8(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint8_t, " %hhu");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_hex8(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint8_t, " 0x%02x");
>> +}
>> +
>> +static int
>> +rte_flow_snprint_byte(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *value_ptr)
>> +{
>> + SNPRINT(uint8_t, "%02x");
>> +}
>> +
>> +#undef SNPRINT
>> +
>> +static int
>> +rte_flow_snprint_attr(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const struct rte_flow_attr *attr) {
>> + int rc;
>> +
>> + if (attr == NULL)
>> + return 0;
>> +
>> + if (attr->group != 0) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "group");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
>> + &attr->group);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + if (attr->priority != 0) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "priority");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
>> + &attr->priority);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + if (attr->transfer) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "transfer");
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + if (attr->ingress) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "ingress");
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + if (attr->egress) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "egress");
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static void
>> +rte_flow_item_init_parse(const struct rte_flow_item *item, size_t
>> item_size,
>> + void *spec, void *last, void *mask) {
>> + if (item->spec != NULL)
>> + memcpy(spec, item->spec, item_size);
>> + else
>> + memset(spec, 0, item_size);
>> +
>> + if (item->last != NULL)
>> + memcpy(last, item->last, item_size);
>> + else
>> + memset(last, 0, item_size);
>> +
>> + if (item->mask != NULL)
>> + memcpy(mask, item->mask, item_size);
>> + else
>> + memset(mask, 0, item_size);
>> +}
>> +
>> +static bool
>> +rte_flow_buf_is_all_zeros(const void *buf_ptr, size_t buf_size) {
>> + const uint8_t *buf = buf_ptr;
>> + unsigned int i;
>> + uint8_t t = 0;
>> +
>> + for (i = 0; i < buf_size; ++i)
>> + t |= buf[i];
>> +
>> + return (t == 0);
>> +}
>> +
>> +static bool
>> +rte_flow_buf_is_all_ones(const void *buf_ptr, size_t buf_size) {
>> + const uint8_t *buf = buf_ptr;
>> + unsigned int i;
>> + uint8_t t = ~0;
>> +
>> + for (i = 0; i < buf_size; ++i)
>> + t &= buf[i];
>> +
>> + return (t == (uint8_t)(~0));
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_field(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + int (*value_dump_cb)(char *, size_t, size_t *,
>> + const void *),
>> + int (*mask_dump_cb)(char *, size_t, size_t *,
>> + const void *),
>> + const char *field_name, size_t field_size,
>> + void *field_spec, void *field_last,
>> + void *field_mask, void *field_full_mask) {
>> + bool mask_is_all_ones;
>> + bool last_is_futile;
>> + int rc;
>> +
>> + if (rte_flow_buf_is_all_zeros(field_mask, field_size))
>> + return 0;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> field_name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (field_full_mask != NULL) {
>> + mask_is_all_ones = (memcmp(field_mask, field_full_mask,
>> + field_size) == 0);
>> + } else {
>> + mask_is_all_ones = rte_flow_buf_is_all_ones(field_mask,
>> + field_size);
>> + }
>> + last_is_futile = rte_flow_buf_is_all_zeros(field_last, field_size) ||
>> + (memcmp(field_spec, field_last, field_size) == 0);
>> +
>> + if (mask_is_all_ones && last_is_futile) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "is");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
>> field_spec);
>> + if (rc != 0)
>> + return rc;
>> +
>> + goto done;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "spec");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (!last_is_futile) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + field_name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "last");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
>> field_last);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> field_name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "mask");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = mask_dump_cb(buf, buf_size, nb_chars_total, field_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> +done:
>> + /*
>> + * Zeroise the printed field. When all item fields have been printed,
>> + * the corresponding item handler will make sure that the whole item
>> + * mask is all-zeros. This is needed to highlight unsupported fields.
>> + *
>> + * If the provided field mask pointer refers to a separate container
>> + * rather than to the field in the item mask directly, it's the duty
>> + * of the item handler to clear the field in the item mask correctly.
>> + */
>> + memset(field_mask, 0, field_size);
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_vf(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_vf *spec = spec_ptr;
>> + struct rte_flow_item_vf *last = last_ptr;
>> + struct rte_flow_item_vf *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex32, "id",
>> + sizeof(spec->id), &spec->id, &last-
>>> id,
>> + &mask->id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_phy_port(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, void *spec_ptr,
>> + void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_phy_port *spec = spec_ptr;
>> + struct rte_flow_item_phy_port *last = last_ptr;
>> + struct rte_flow_item_phy_port *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex32, "index",
>> + sizeof(spec->index), &spec->index,
>> + &last->index, &mask->index, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_port_id(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, void *spec_ptr,
>> + void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_port_id *spec = spec_ptr;
>> + struct rte_flow_item_port_id *last = last_ptr;
>> + struct rte_flow_item_port_id *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex32, "id",
>> + sizeof(spec->id), &spec->id, &last-
>>> id,
>> + &mask->id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_eth(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_eth *spec = spec_ptr;
>> + struct rte_flow_item_eth *last = last_ptr;
>> + struct rte_flow_item_eth *mask = mask_ptr;
>> + uint8_t has_vlan_full_mask = 1;
>> + uint8_t has_vlan_spec;
>> + uint8_t has_vlan_last;
>> + uint8_t has_vlan_mask;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ether_addr,
>> + rte_flow_snprint_ether_addr, "dst",
>> + sizeof(spec->hdr.d_addr),
>> + &spec->hdr.d_addr, &last-
>>> hdr.d_addr,
>> + &mask->hdr.d_addr, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ether_addr,
>> + rte_flow_snprint_ether_addr, "src",
>> + sizeof(spec->hdr.s_addr),
>> + &spec->hdr.s_addr, &last-
>>> hdr.s_addr,
>> + &mask->hdr.s_addr, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "type",
>> + sizeof(spec->hdr.ether_type),
>> + &spec->hdr.ether_type,
>> + &last->hdr.ether_type,
>> + &mask->hdr.ether_type, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + has_vlan_spec = spec->has_vlan;
>> + has_vlan_last = last->has_vlan;
>> + has_vlan_mask = mask->has_vlan;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_uint8, "has_vlan",
>> + sizeof(has_vlan_spec),
>> &has_vlan_spec,
>> + &has_vlan_last, &has_vlan_mask,
>> + &has_vlan_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + mask->has_vlan = 0;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_vlan(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_vlan *spec = spec_ptr;
>> + struct rte_flow_item_vlan *last = last_ptr;
>> + struct rte_flow_item_vlan *mask = mask_ptr;
>> + uint8_t has_more_vlan_full_mask = 1;
>> + uint8_t has_more_vlan_spec;
>> + uint8_t has_more_vlan_last;
>> + uint8_t has_more_vlan_mask;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "tci",
>> + sizeof(spec->hdr.vlan_tci),
>> + &spec->hdr.vlan_tci,
>> + &last->hdr.vlan_tci,
>> + &mask->hdr.vlan_tci, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> + "inner_type",
>> + sizeof(spec->hdr.eth_proto),
>> + &spec->hdr.eth_proto,
>> + &last->hdr.eth_proto,
>> + &mask->hdr.eth_proto, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + has_more_vlan_spec = spec->has_more_vlan;
>> + has_more_vlan_last = last->has_more_vlan;
>> + has_more_vlan_mask = mask->has_more_vlan;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_uint8,
>> + "has_more_vlan",
>> + sizeof(has_more_vlan_spec),
>> + &has_more_vlan_spec,
>> + &has_more_vlan_last,
>> + &has_more_vlan_mask,
>> + &has_more_vlan_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + mask->has_more_vlan = 0;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_ipv4(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_ipv4 *spec = spec_ptr;
>> + struct rte_flow_item_ipv4 *last = last_ptr;
>> + struct rte_flow_item_ipv4 *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex8,
>> + rte_flow_snprint_hex8, "tos",
>> + sizeof(spec->hdr.type_of_service),
>> + &spec->hdr.type_of_service,
>> + &last->hdr.type_of_service,
>> + &mask->hdr.type_of_service,
>> NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> + "packet_id",
>> + sizeof(spec->hdr.packet_id),
>> + &spec->hdr.packet_id,
>> + &last->hdr.packet_id,
>> + &mask->hdr.packet_id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> + "fragment_offset",
>> + sizeof(spec->hdr.fragment_offset),
>> + &spec->hdr.fragment_offset,
>> + &last->hdr.fragment_offset,
>> + &mask->hdr.fragment_offset,
>> NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_hex8, "ttl",
>> + sizeof(spec->hdr.time_to_live),
>> + &spec->hdr.time_to_live,
>> + &last->hdr.time_to_live,
>> + &mask->hdr.time_to_live, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_hex8, "proto",
>> + sizeof(spec->hdr.next_proto_id),
>> + &spec->hdr.next_proto_id,
>> + &last->hdr.next_proto_id,
>> + &mask->hdr.next_proto_id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ipv4_addr,
>> + rte_flow_snprint_ipv4_addr, "src",
>> + sizeof(spec->hdr.src_addr),
>> + &spec->hdr.src_addr,
>> + &last->hdr.src_addr,
>> + &mask->hdr.src_addr, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ipv4_addr,
>> + rte_flow_snprint_ipv4_addr, "dst",
>> + sizeof(spec->hdr.dst_addr),
>> + &spec->hdr.dst_addr,
>> + &last->hdr.dst_addr,
>> + &mask->hdr.dst_addr, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_ipv6(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + uint32_t tc_full_mask = (RTE_IPV6_HDR_TC_MASK >>
>> RTE_IPV6_HDR_TC_SHIFT);
>> + uint32_t fl_full_mask = (RTE_IPV6_HDR_FL_MASK >>
>> RTE_IPV6_HDR_FL_SHIFT);
>> + struct rte_flow_item_ipv6 *spec = spec_ptr;
>> + struct rte_flow_item_ipv6 *last = last_ptr;
>> + struct rte_flow_item_ipv6 *mask = mask_ptr;
>> + uint8_t has_frag_ext_full_mask = 1;
>> + uint8_t has_frag_ext_spec;
>> + uint8_t has_frag_ext_last;
>> + uint8_t has_frag_ext_mask;
>> + uint32_t vtc_flow;
>> + uint32_t fl_spec;
>> + uint32_t fl_last;
>> + uint32_t fl_mask;
>> + uint32_t tc_spec;
>> + uint32_t tc_last;
>> + uint32_t tc_mask;
>> + int rc;
>> +
>> + vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
>> + tc_spec = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
>> RTE_IPV6_HDR_TC_SHIFT;
>> + fl_spec = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
>> RTE_IPV6_HDR_FL_SHIFT;
>> +
>> + vtc_flow = rte_be_to_cpu_32(last->hdr.vtc_flow);
>> + tc_last = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
>> RTE_IPV6_HDR_TC_SHIFT;
>> + fl_last = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
>> RTE_IPV6_HDR_FL_SHIFT;
>> +
>> + vtc_flow = rte_be_to_cpu_32(mask->hdr.vtc_flow);
>> + tc_mask = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
>> RTE_IPV6_HDR_TC_SHIFT;
>> + fl_mask = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
>> RTE_IPV6_HDR_FL_SHIFT;
>> +
>> + mask->hdr.vtc_flow &=
>> ~rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK |
>> + RTE_IPV6_HDR_FL_MASK);
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex8,
>> + rte_flow_snprint_hex8, "tc",
>> + sizeof(tc_spec), &tc_spec, &tc_last,
>> + &tc_mask, &tc_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex20, "flow",
>> + sizeof(fl_spec), &fl_spec, &fl_last,
>> + &fl_mask, &fl_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_hex8, "proto",
>> + sizeof(spec->hdr.proto),
>> + &spec->hdr.proto,
>> + &last->hdr.proto,
>> + &mask->hdr.proto, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_hex8, "hop",
>> + sizeof(spec->hdr.hop_limits),
>> + &spec->hdr.hop_limits,
>> + &last->hdr.hop_limits,
>> + &mask->hdr.hop_limits, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ipv6_addr,
>> + rte_flow_snprint_ipv6_addr, "src",
>> + sizeof(spec->hdr.src_addr),
>> + &spec->hdr.src_addr,
>> + &last->hdr.src_addr,
>> + &mask->hdr.src_addr, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_ipv6_addr,
>> + rte_flow_snprint_ipv6_addr, "dst",
>> + sizeof(spec->hdr.dst_addr),
>> + &spec->hdr.dst_addr,
>> + &last->hdr.dst_addr,
>> + &mask->hdr.dst_addr, NULL);
>> +
>> + has_frag_ext_spec = spec->has_frag_ext;
>> + has_frag_ext_last = last->has_frag_ext;
>> + has_frag_ext_mask = mask->has_frag_ext;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint8,
>> + rte_flow_snprint_uint8,
>> "has_frag_ext",
>> + sizeof(has_frag_ext_spec),
>> + &has_frag_ext_spec,
>> &has_frag_ext_last,
>> + &has_frag_ext_mask,
>> + &has_frag_ext_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + mask->has_frag_ext = 0;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_udp(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_udp *spec = spec_ptr;
>> + struct rte_flow_item_udp *last = last_ptr;
>> + struct rte_flow_item_udp *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "src",
>> + sizeof(spec->hdr.src_port),
>> + &spec->hdr.src_port,
>> + &last->hdr.src_port,
>> + &mask->hdr.src_port, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "dst",
>> + sizeof(spec->hdr.dst_port),
>> + &spec->hdr.dst_port,
>> + &last->hdr.dst_port,
>> + &mask->hdr.dst_port, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_tcp(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_tcp *spec = spec_ptr;
>> + struct rte_flow_item_tcp *last = last_ptr;
>> + struct rte_flow_item_tcp *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "src",
>> + sizeof(spec->hdr.src_port),
>> + &spec->hdr.src_port,
>> + &last->hdr.src_port,
>> + &mask->hdr.src_port, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> "dst",
>> + sizeof(spec->hdr.dst_port),
>> + &spec->hdr.dst_port,
>> + &last->hdr.dst_port,
>> + &mask->hdr.dst_port, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex8,
>> + rte_flow_snprint_hex8, "flags",
>> + sizeof(spec->hdr.tcp_flags),
>> + &spec->hdr.tcp_flags,
>> + &last->hdr.tcp_flags,
>> + &mask->hdr.tcp_flags, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_vxlan(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_vxlan *spec = spec_ptr;
>> + struct rte_flow_item_vxlan *last = last_ptr;
>> + struct rte_flow_item_vxlan *mask = mask_ptr;
>> + uint32_t vni_full_mask = 0xffffff;
>> + uint32_t vni_spec;
>> + uint32_t vni_last;
>> + uint32_t vni_mask;
>> + int rc;
>> +
>> + vni_spec = rte_be_to_cpu_32(spec->hdr.vx_vni) >> 8;
>> + vni_last = rte_be_to_cpu_32(last->hdr.vx_vni) >> 8;
>> + vni_mask = rte_be_to_cpu_32(mask->hdr.vx_vni) >> 8;
>> +
>> + mask->hdr.vx_vni &= ~RTE_BE32(0xffffff00);
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex24, "vni",
>> + sizeof(vni_spec), &vni_spec,
>> + &vni_last, &vni_mask,
>> + &vni_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_nvgre(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_nvgre *spec = spec_ptr;
>> + struct rte_flow_item_nvgre *last = last_ptr;
>> + struct rte_flow_item_nvgre *mask = mask_ptr;
>> + uint32_t *tni_and_flow_id_specp = (uint32_t *)spec->tni;
>> + uint32_t *tni_and_flow_id_lastp = (uint32_t *)last->tni;
>> + uint32_t *tni_and_flow_id_maskp = (uint32_t *)mask->tni;
>> + uint32_t tni_full_mask = 0xffffff;
>> + uint32_t tni_spec;
>> + uint32_t tni_last;
>> + uint32_t tni_mask;
>> + int rc;
>> +
>> + tni_spec = rte_be_to_cpu_32(*tni_and_flow_id_specp) >> 8;
>> + tni_last = rte_be_to_cpu_32(*tni_and_flow_id_lastp) >> 8;
>> + tni_mask = rte_be_to_cpu_32(*tni_and_flow_id_maskp) >> 8;
>> +
>> + memset(mask->tni, 0, sizeof(mask->tni));
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex24, "tni",
>> + sizeof(tni_spec), &tni_spec,
>> + &tni_last, &tni_mask,
>> + &tni_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_geneve(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_geneve *spec = spec_ptr;
>> + struct rte_flow_item_geneve *last = last_ptr;
>> + struct rte_flow_item_geneve *mask = mask_ptr;
>> + uint32_t *vni_and_rsvd_specp = (uint32_t *)spec->vni;
>> + uint32_t *vni_and_rsvd_lastp = (uint32_t *)last->vni;
>> + uint32_t *vni_and_rsvd_maskp = (uint32_t *)mask->vni;
>> + uint32_t vni_full_mask = 0xffffff;
>> + uint16_t optlen_full_mask = 0x3f;
>> + uint16_t optlen_spec;
>> + uint16_t optlen_last;
>> + uint16_t optlen_mask;
>> + uint32_t vni_spec;
>> + uint32_t vni_last;
>> + uint32_t vni_mask;
>> + int rc;
>> +
>> + optlen_spec = rte_be_to_cpu_16(spec->ver_opt_len_o_c_rsvd0) &
>> 0x3f00;
>> + optlen_spec >>= 8;
>> +
>> + optlen_last = rte_be_to_cpu_16(last->ver_opt_len_o_c_rsvd0) &
>> 0x3f00;
>> + optlen_last >>= 8;
>> +
>> + optlen_mask = rte_be_to_cpu_16(mask->ver_opt_len_o_c_rsvd0)
>> & 0x3f00;
>> + optlen_mask >>= 8;
>> +
>> + mask->ver_opt_len_o_c_rsvd0 &= ~RTE_BE16(0x3f00);
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16,
>> + rte_flow_snprint_hex8, "optlen",
>> + sizeof(optlen_spec), &optlen_spec,
>> + &optlen_last, &optlen_mask,
>> + &optlen_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_hex16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> + "protocol", sizeof(spec->protocol),
>> + &spec->protocol, &last->protocol,
>> + &mask->protocol, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + vni_spec = rte_be_to_cpu_32(*vni_and_rsvd_specp) >> 8;
>> + vni_last = rte_be_to_cpu_32(*vni_and_rsvd_lastp) >> 8;
>> + vni_mask = rte_be_to_cpu_32(*vni_and_rsvd_maskp) >> 8;
>> +
>> + memset(mask->vni, 0, sizeof(mask->vni));
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex24, "vni",
>> + sizeof(vni_spec), &vni_spec,
>> + &vni_last, &vni_mask,
>> + &vni_full_mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_mark(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_mark *spec = spec_ptr;
>> + struct rte_flow_item_mark *last = last_ptr;
>> + struct rte_flow_item_mark *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint32,
>> + rte_flow_snprint_hex32, "id",
>> + sizeof(spec->id), &spec->id,
>> + &last->id, &mask->id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_item_pppoed(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
>> + struct rte_flow_item_pppoe *spec = spec_ptr;
>> + struct rte_flow_item_pppoe *last = last_ptr;
>> + struct rte_flow_item_pppoe *mask = mask_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
>> + rte_flow_snprint_uint16_be2cpu,
>> + rte_flow_snprint_hex16_be2cpu,
>> + "seid", sizeof(spec->session_id),
>> + &spec->session_id, &last-
>>> session_id,
>> + &mask->session_id, NULL);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static const struct {
>> + const char *name;
>> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_char_total,
>> + void *spec_ptr, void *last_ptr, void *mask_ptr);
>> + size_t size;
>> +} item_table[] = {
>> + [RTE_FLOW_ITEM_TYPE_VOID] = {
>> + .name = "void"
>> + },
>> + [RTE_FLOW_ITEM_TYPE_PF] = {
>> + .name = "pf"
>> + },
>> + [RTE_FLOW_ITEM_TYPE_PPPOES] = {
>> + .name = "pppoes"
>> + },
>> + [RTE_FLOW_ITEM_TYPE_PPPOED] = {
>> + .name = "pppoed",
>> + .parse_cb = rte_flow_snprint_item_pppoed,
>> + .size = sizeof(struct rte_flow_item_pppoe)
>> + },
>> +
>> +#define ITEM(_name_uppercase, _name_lowercase) \
>> + [RTE_FLOW_ITEM_TYPE_##_name_uppercase] = {
>> \
>> + .name = #_name_lowercase, \
>> + .parse_cb = rte_flow_snprint_item_##_name_lowercase,
>> \
>> + .size = sizeof(struct rte_flow_item_##_name_lowercase)
>> \
>> + }
>> +
>> + ITEM(VF, vf),
>> + ITEM(PHY_PORT, phy_port),
>> + ITEM(PORT_ID, port_id),
>> + ITEM(ETH, eth),
>> + ITEM(VLAN, vlan),
>> + ITEM(IPV4, ipv4),
>> + ITEM(IPV6, ipv6),
>> + ITEM(UDP, udp),
>> + ITEM(TCP, tcp),
>> + ITEM(VXLAN, vxlan),
>> + ITEM(NVGRE, nvgre),
>> + ITEM(GENEVE, geneve),
>> + ITEM(MARK, mark),
>> +
>> +#undef ITEM
>> +};
>> +
>> +static int
>> +rte_flow_snprint_item(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const struct rte_flow_item *item) {
>> + int rc;
>> +
>> + if (item->type < 0 || item->type >= RTE_DIM(item_table) ||
>> + item_table[item->type].name == NULL) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "{unknown}");
>> + if (rc != 0)
>> + return rc;
>> +
>> + goto out;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + item_table[item->type].name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (item_table[item->type].parse_cb != NULL) {
>> + size_t item_size = item_table[item->type].size;
>> + uint8_t spec[item_size];
>> + uint8_t last[item_size];
>> + uint8_t mask[item_size];
>> +
>> + rte_flow_item_init_parse(item, item_size, spec, last, mask);
>> +
>> + rc = item_table[item->type].parse_cb(buf, buf_size,
>> + nb_chars_total,
>> + spec, last, mask);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (!rte_flow_buf_is_all_zeros(mask, item_size)) {
>> + rc = rte_flow_snprint_str(buf, buf_size,
>> + nb_chars_total,
>> + "{unknown bits}");
>> + if (rc != 0)
>> + return rc;
>> + }
>> + }
>> +
>> +out:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_pattern(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const struct rte_flow_item pattern[]) {
>> + const struct rte_flow_item *item;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "pattern");
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (pattern == NULL)
>> + goto end;
>> +
>> + for (item = pattern;
>> + item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
>> + rc = rte_flow_snprint_item(buf, buf_size, nb_chars_total,
>> item);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> +end:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_jump(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_jump *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "group");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
>> + &conf->group);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_mark(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_mark *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
>>> id);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_queue(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void *conf_ptr) {
>> + const struct rte_flow_action_queue *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
>> + &conf->index);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_count(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void *conf_ptr) {
>> + const struct rte_flow_action_count *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "identifier");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
>>> id);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (conf->shared) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "shared");
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_rss_func(char *buf, size_t buf_size,
>> + size_t *nb_chars_total,
>> + enum rte_eth_hash_function func)
>> +{
>> + int rc;
>> +
>> + if (func == RTE_ETH_HASH_FUNCTION_DEFAULT)
>> + return 0;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "func");
>> + if (rc != 0)
>> + return rc;
>> +
>> + switch (func) {
>> + case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "toeplitz");
>> + break;
>> + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "simple_xor");
>> + break;
>> + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "symmetric_toeplitz");
>> + break;
>> + default:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "{unknown}");
>> + break;
>> + }
>> +
>> + return rc;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_rss_level(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, uint32_t level) {
>> + int rc;
>> +
>> + if (level == 0)
>> + return 0;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "level");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &level);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static const struct {
>> + const char *name;
>> + uint64_t flag;
>> +} rss_type_table[] = {
>> + { "ipv4", ETH_RSS_IPV4 },
>> + { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
>> + { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
>> + { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
>> + { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
>> + { "ipv6", ETH_RSS_IPV6 },
>> + { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
>> + { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
>> + { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
>> + { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
>> + { "ipv6-ex", ETH_RSS_IPV6_EX },
>> + { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
>> + { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
>> +
>> + { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
>> + { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
>> + { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
>> + { "l4-dst-only", ETH_RSS_L4_DST_ONLY }, };
>> +
>> +static int
>> +rte_flow_snprint_action_rss_types(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, uint64_t types) {
>> + unsigned int i;
>> + int rc;
>> +
>> + if (types == 0)
>> + return 0;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "types");
>> + if (rc != 0)
>> + return rc;
>> +
>> + for (i = 0; i < RTE_DIM(rss_type_table); ++i) {
>> + uint64_t flag = rss_type_table[i].flag;
>> +
>> + if ((types & flag) == 0)
>> + continue;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + rss_type_table[i].name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + types &= ~flag;
>> + }
>> +
>> + if (types != 0) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "{unknown}");
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_rss_queues(char *buf, size_t buf_size,
>> + size_t *nb_chars_total,
>> + const uint16_t *queues,
>> + unsigned int nb_queues)
>> +{
>> + unsigned int i;
>> + int rc;
>> +
>> + if (nb_queues == 0)
>> + return 0;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "queues");
>> + if (rc != 0)
>> + return rc;
>> +
>> + for (i = 0; i < nb_queues; ++i) {
>> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
>> + &queues[i]);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_rss(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_rss *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_action_rss_func(buf, buf_size, nb_chars_total,
>> + conf->func);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_action_rss_level(buf, buf_size,
>> nb_chars_total,
>> + conf->level);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_action_rss_types(buf, buf_size,
>> nb_chars_total,
>> + conf->types);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (conf->key_len != 0) {
>> + if (conf->key != NULL) {
>> + unsigned int i;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size,
>> nb_chars_total,
>> + "" /* results in space */);
>> + if (rc != 0)
>> + return rc;
>> +
>> + for (i = 0; i < conf->key_len; ++i) {
>> + rc = rte_flow_snprint_byte(buf, buf_size,
>> + nb_chars_total,
>> + &conf->key[i]);
>> + if (rc != 0)
>> + return rc;
>> + }
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "key_len");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
>> + &conf->key_len);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> + rc = rte_flow_snprint_action_rss_queues(buf, buf_size,
>> nb_chars_total,
>> + conf->queue, conf-
>>> queue_num);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_vf(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_vf *conf = conf_ptr;
>> + int rc;
>> +
>> + if (conf->original) {
>> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "original on");
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
>>> id);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_phy_port(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void
>> *conf_ptr) {
>> + const struct rte_flow_action_phy_port *conf = conf_ptr;
>> + int rc;
>> +
>> + if (conf->original) {
>> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "original on");
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
>> + &conf->index);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_port_id(char *buf, size_t buf_size,
>> + size_t *nb_chars_total, const void *conf_ptr)
>> {
>> + const struct rte_flow_action_port_id *conf = conf_ptr;
>> + int rc;
>> +
>> + if (conf->original) {
>> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "original on");
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
>>> id);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_of_push_vlan(char *buf, size_t buf_size,
>> + size_t *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_of_push_vlan *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> "ethertype");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_hex16_be2cpu(buf, buf_size, nb_chars_total,
>> + &conf->ethertype);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_of_set_vlan_vid(char *buf, size_t buf_size,
>> + size_t *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_of_set_vlan_vid *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_vid");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint16_be2cpu(buf, buf_size, nb_chars_total,
>> + &conf->vlan_vid);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_action_of_set_vlan_pcp(char *buf, size_t buf_size,
>> + size_t *nb_chars_total,
>> + const void *conf_ptr)
>> +{
>> + const struct rte_flow_action_of_set_vlan_pcp *conf = conf_ptr;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_pcp");
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_uint8(buf, buf_size, nb_chars_total,
>> + &conf->vlan_pcp);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static const struct {
>> + const char *name;
>> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const void *conf_ptr);
>> +} action_table[] = {
>> + [RTE_FLOW_ACTION_TYPE_VOID] = {
>> + .name = "void"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_FLAG] = {
>> + .name = "flag"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_DROP] = {
>> + .name = "drop"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_PF] = {
>> + .name = "pf"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
>> + .name = "of_pop_vlan"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
>> + .name = "vxlan_encap"
>> + },
>> + [RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
>> + .name = "vxlan_decap"
>> + },
>> +
>> +#define ACTION(_name_uppercase, _name_lowercase) \
>> + [RTE_FLOW_ACTION_TYPE_##_name_uppercase] = {
>> \
>> + .name = #_name_lowercase, \
>> + .parse_cb = rte_flow_snprint_action_##_name_lowercase,
>> \
>> + }
>> +
>> + ACTION(JUMP, jump),
>> + ACTION(MARK, mark),
>> + ACTION(QUEUE, queue),
>> + ACTION(COUNT, count),
>> + ACTION(RSS, rss),
>> + ACTION(VF, vf),
>> + ACTION(PHY_PORT, phy_port),
>> + ACTION(PORT_ID, port_id),
>> + ACTION(OF_PUSH_VLAN, of_push_vlan),
>> + ACTION(OF_SET_VLAN_VID, of_set_vlan_vid),
>> + ACTION(OF_SET_VLAN_PCP, of_set_vlan_pcp),
>> +
>> +#undef ACTION
>> +};
>> +
>> +static int
>> +rte_flow_snprint_action(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const struct rte_flow_action *action) {
>> + int rc;
>> +
>> + if (action->type < 0 || action->type >= RTE_DIM(action_table) ||
>> + action_table[action->type].name == NULL) {
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + "{unknown}");
>> + if (rc != 0)
>> + return rc;
>> +
>> + goto out;
>> + }
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
>> + action_table[action->type].name);
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (action_table[action->type].parse_cb != NULL &&
>> + action->conf != NULL) {
>> + rc = action_table[action->type].parse_cb(buf, buf_size,
>> + nb_chars_total,
>> + action->conf);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> +out:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +rte_flow_snprint_actions(char *buf, size_t buf_size, size_t
>> *nb_chars_total,
>> + const struct rte_flow_action actions[]) {
>> + const struct rte_flow_action *action;
>> + int rc;
>> +
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "actions");
>> + if (rc != 0)
>> + return rc;
>> +
>> + if (actions == NULL)
>> + goto end;
>> +
>> + for (action = actions;
>> + action->type != RTE_FLOW_ACTION_TYPE_END; ++action) {
>> + rc = rte_flow_snprint_action(buf, buf_size, nb_chars_total,
>> + action);
>> + if (rc != 0)
>> + return rc;
>> + }
>> +
>> +end:
>> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> +
>> +int
>> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
>> + const struct rte_flow_attr *attr,
>> + const struct rte_flow_item pattern[],
>> + const struct rte_flow_action actions[]) {
>> + int rc;
>> +
>> + if (buf == NULL && buf_size != 0)
>> + return -EINVAL;
>> +
>> + *nb_chars_total = 0;
>> +
>> + rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total,
>> pattern);
>> + if (rc != 0)
>> + return rc;
>> +
>> + rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total,
>> actions);
>> + if (rc != 0)
>> + return rc;
>> +
>> + return 0;
>> +}
>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
>> 44d30b05ae..a626cac944 100644
>> --- a/lib/ethdev/version.map
>> +++ b/lib/ethdev/version.map
>> @@ -249,6 +249,9 @@ EXPERIMENTAL {
>> rte_mtr_meter_policy_delete;
>> rte_mtr_meter_policy_update;
>> rte_mtr_meter_policy_validate;
>> +
>> + # added in 21.08
>> + rte_flow_snprint;
>> };
>>
>> INTERNAL {
>> --
>> 2.20.1
--
Ivan M
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-31 2:28 ` Stephen Hemminger
@ 2021-06-01 14:17 ` Ivan Malov
2021-06-01 15:10 ` Stephen Hemminger
0 siblings, 1 reply; 16+ messages in thread
From: Ivan Malov @ 2021-06-01 14:17 UTC (permalink / raw)
To: Stephen Hemminger, Ori Kam
Cc: dev, NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Ray Kinsella, Neil Horman
Hi Stephen,
I agree that the API rte_flow_snprint() itself would look better if it
provided the number of characters in its return value, like snprintf
does. However, with respect to all internal helpers, this wouldn't be
that clear and simple: one would have to update the buffer pointer and
decrease the buffer size before each internal (smaller) helper
invocation. That would make the code more cumbersome in many places.
In v2, I will at least try to make the main API return the number of
characters. Other than that, it can be discussed further.
Thank you.
On 31/05/2021 05:28, Stephen Hemminger wrote:
> On Sun, 30 May 2021 07:27:32 +0000
> Ori Kam <orika@nvidia.com> wrote:
>
>>>
>>> DPDK applications (for example, OvS) or tests which use RTE flow API need to
>>> log created or rejected flow rules to help to recognise what goes right or
>>> wrong. From this standpoint, testpmd-compliant format is nice for the
>>> purpose because it allows to copy-paste the flow rules and debug using
>>> testpmd.
>>>
>>> Recognisable pattern items:
>>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP, VXLAN,
>>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>>>
>>> Recognisable actions:
>>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
>>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
>>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>>>
>>> Recognisable RSS types (action RSS):
>>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
>>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
>>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
>>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
>>>
>>> Unrecognised parts of the flow specification are represented by tokens
>>> "{unknown}" and "{unknown bits}". Interested parties are welcome to
>>> extend this tool to recognise more items and actions.
>>>
>>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
>>> ---
>>> lib/ethdev/meson.build | 1 +
>>> lib/ethdev/rte_flow.h | 33 +
>>> lib/ethdev/rte_flow_snprint.c | 1681
>>> +++++++++++++++++++++++++++++++++
>>> lib/ethdev/version.map | 3 +
>>> 4 files changed, 1718 insertions(+)
>>> create mode 100644 lib/ethdev/rte_flow_snprint.c
>>>
>>> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
>>> 0205c853df..97bba4fa1b 100644
>>> --- a/lib/ethdev/meson.build
>>> +++ b/lib/ethdev/meson.build
>>> @@ -8,6 +8,7 @@ sources = files(
>>> 'rte_class_eth.c',
>>> 'rte_ethdev.c',
>>> 'rte_flow.c',
>>> + 'rte_flow_snprint.c',
>>> 'rte_mtr.c',
>>> 'rte_tm.c',
>>> )
>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
>>> 961a5884fe..cd5e9ef631 100644
>>> --- a/lib/ethdev/rte_flow.h
>>> +++ b/lib/ethdev/rte_flow.h
>>> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
>>> struct rte_flow_item *items,
>>> uint32_t num_of_items,
>>> struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>> + *
>>> + * Dump testpmd-compliant textual representation of the flow rule.
>>> + * Invoke this with zero-size buffer to learn the string size and
>>> + * invoke this for the second time to actually dump the flow rule.
>>> + * The buffer size on the second invocation = the string size + 1.
>>> + *
>>> + * @param[out] buf
>>> + * Buffer to save the dump in, or NULL
>>> + * @param buf_size
>>> + * Buffer size, or 0
>>> + * @param[out] nb_chars_total
>>> + * Resulting string size (excluding the terminating null byte)
>>> + * @param[in] attr
>>> + * Flow rule attributes.
>>> + * @param[in] pattern
>>> + * Pattern specification (list terminated by the END pattern item).
>>> + * @param[in] actions
>>> + * Associated actions (list terminated by the END action).
>>> + *
>>> + * @return
>>> + * 0 on success, a negative errno value otherwise
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
>>> + const struct rte_flow_attr *attr,
>>> + const struct rte_flow_item pattern[],
>>> + const struct rte_flow_action actions[]);
>>> +
>
> The code would be clearer and simpler if you adopted the same return value
> as snprintf. Then lots of places could be just tail calls and the nb_chars_total
> would be unnecessary.
>
--
Ivan M
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-01 14:17 ` Ivan Malov
@ 2021-06-01 15:10 ` Stephen Hemminger
0 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2021-06-01 15:10 UTC (permalink / raw)
To: Ivan Malov
Cc: Ori Kam, dev, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, Ray Kinsella, Neil Horman
On Tue, 1 Jun 2021 17:17:24 +0300
Ivan Malov <Ivan.Malov@oktetlabs.ru> wrote:
> Hi Stephen,
>
> I agree that the API rte_flow_snprint() itself would look better if it
> provided the number of characters in its return value, like snprintf
> does. However, with respect to all internal helpers, this wouldn't be
> that clear and simple: one would have to update the buffer pointer and
> decrease the buffer size before each internal (smaller) helper
> invocation. That would make the code more cumbersome in many places.
>
> In v2, I will at least try to make the main API return the number of
> characters. Other than that, it can be discussed further.
>
> Thank you.
>
> On 31/05/2021 05:28, Stephen Hemminger wrote:
> > On Sun, 30 May 2021 07:27:32 +0000
> > Ori Kam <orika@nvidia.com> wrote:
> >
> >>>
> >>> DPDK applications (for example, OvS) or tests which use RTE flow API need to
> >>> log created or rejected flow rules to help to recognise what goes right or
> >>> wrong. From this standpoint, testpmd-compliant format is nice for the
> >>> purpose because it allows to copy-paste the flow rules and debug using
> >>> testpmd.
> >>>
> >>> Recognisable pattern items:
> >>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP, VXLAN,
> >>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >>>
> >>> Recognisable actions:
> >>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
> >>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> >>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >>>
> >>> Recognisable RSS types (action RSS):
> >>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> >>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> >>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
> >>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
> >>>
> >>> Unrecognised parts of the flow specification are represented by tokens
> >>> "{unknown}" and "{unknown bits}". Interested parties are welcome to
> >>> extend this tool to recognise more items and actions.
> >>>
> >>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> >>> ---
> >>> lib/ethdev/meson.build | 1 +
> >>> lib/ethdev/rte_flow.h | 33 +
> >>> lib/ethdev/rte_flow_snprint.c | 1681
> >>> +++++++++++++++++++++++++++++++++
> >>> lib/ethdev/version.map | 3 +
> >>> 4 files changed, 1718 insertions(+)
> >>> create mode 100644 lib/ethdev/rte_flow_snprint.c
> >>>
> >>> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
> >>> 0205c853df..97bba4fa1b 100644
> >>> --- a/lib/ethdev/meson.build
> >>> +++ b/lib/ethdev/meson.build
> >>> @@ -8,6 +8,7 @@ sources = files(
> >>> 'rte_class_eth.c',
> >>> 'rte_ethdev.c',
> >>> 'rte_flow.c',
> >>> + 'rte_flow_snprint.c',
> >>> 'rte_mtr.c',
> >>> 'rte_tm.c',
> >>> )
> >>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> >>> 961a5884fe..cd5e9ef631 100644
> >>> --- a/lib/ethdev/rte_flow.h
> >>> +++ b/lib/ethdev/rte_flow.h
> >>> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> >>> struct rte_flow_item *items,
> >>> uint32_t num_of_items,
> >>> struct rte_flow_error *error);
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice
> >>> + *
> >>> + * Dump testpmd-compliant textual representation of the flow rule.
> >>> + * Invoke this with zero-size buffer to learn the string size and
> >>> + * invoke this for the second time to actually dump the flow rule.
> >>> + * The buffer size on the second invocation = the string size + 1.
> >>> + *
> >>> + * @param[out] buf
> >>> + * Buffer to save the dump in, or NULL
> >>> + * @param buf_size
> >>> + * Buffer size, or 0
> >>> + * @param[out] nb_chars_total
> >>> + * Resulting string size (excluding the terminating null byte)
> >>> + * @param[in] attr
> >>> + * Flow rule attributes.
> >>> + * @param[in] pattern
> >>> + * Pattern specification (list terminated by the END pattern item).
> >>> + * @param[in] actions
> >>> + * Associated actions (list terminated by the END action).
> >>> + *
> >>> + * @return
> >>> + * 0 on success, a negative errno value otherwise
> >>> + */
> >>> +__rte_experimental
> >>> +int
> >>> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> >>> + const struct rte_flow_attr *attr,
> >>> + const struct rte_flow_item pattern[],
> >>> + const struct rte_flow_action actions[]);
> >>> +
> >
> > The code would be clearer and simpler if you adopted the same return value
> > as snprintf. Then lots of places could be just tail calls and the nb_chars_total
> > would be unnecessary.
> >
>
One other thing. Code for this kind of thing grows like a weed.
It would be good to change from if/else/switch to a more table driven
approach.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-01 14:08 ` Ivan Malov
@ 2021-06-02 13:32 ` Ori Kam
2021-06-02 13:49 ` Andrew Rybchenko
0 siblings, 1 reply; 16+ messages in thread
From: Ori Kam @ 2021-06-02 13:32 UTC (permalink / raw)
To: Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Ray Kinsella, Neil Horman
Hi Ivan,
> -----Original Message-----
> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
>
> Hi Ori,
>
> Your review efforts are much appreciated. I understand your concern
> about the partial item/action coverage, but there are some points to be
> considered when addressing it:
> - It's anyway hardly possible to use the printed flow directly in
> testpmd if it contains "opaque", or "PMD-specific", items/actions in
> terms of the tunnel offload model. These items/actions have to be
> omitted when printing the flow, and their absence in the resulting
> string means that copy/pasting the flow to testpmd isn't helpful in this
> particular case.
I fully agree with you that some of the rules can't be printed. That is why.
I'm not sure having partial solution is the way to go. If OVS for example cares about
some of the item/action, maybe this log should be on their part.
> - There's action ENCAP which also can't be fully represented by the tool
> in question, simply because it has no parameters. In tespmd, one first
> has to issue "set vxlan" command to configure the encap. header, whilst
> "vxlan" token in the flow rule string just refers to the previously set
> encap. parameters. The suggested flow print helper can't reliably print
> these two components ("set vxlan" and the flow rule itself) as they
> belong to different testpmd command strings.
>
Again, I agree with you but like my above answer, do we want a partial solution
in DPDK?
> As you might see, completeness of the solution wouldn't necessarily be
> reachable, even if full item/action coverage was provided.
>
> As for the item/action coverage itself, it's rather controversial. On
> the one hand, yes, we should probably try to cover more items and
> actions in the suggested patch, to the extent allowed by our current
> priorities. But on the other hand, the existing coverage might not be
> that poor: it's fairly elaborate and at least allows to print the most
> common flow rules.
>
That is my main issue you are going to push something that is good for you
and maybe some other cases, but it can't be used by all application, even with
the most basic commands like encap.
> Yes, macros and some other cunning ways to cover more flow specifics
> might come in handy, but, at the same time, can be rather error prone.
> Sometimes it's more robust to just write the code out in full.
>
I'm always in favor of easy of extra complex but too hard is also not good.
Thanks,
Ori
> Thank you.
>
> On 30/05/2021 10:27, Ori Kam wrote:
> > Hi Ivan,
> >
> > First nice idea and thanks for the picking up the ball.
> >
> > Before a detail review,
> > The main thing I'm concerned about is that this print will be partially
> supported,
> > I know that you covered this issue by printing unknown for unsupported
> item/actions,
> > but this will mean that it is enough that one item/action is not supported
> and already the
> > flow can't be used in testpmd.
> > To get full support it means that the developer needs to add such print
> with each new
> > item/action. I agree it is possible, but it has high overhead for each feature.
> >
> > Maybe we should somehow create a macros for the prints or other easier
> to support ways.
> >
> > For example, just printing the ipv4 has 7 function calls inside of it each one
> with error checking,
> > and I'm not counting the dedicated functions.
> >
> >
> >
> > Best,
> > Ori
> >
> >
> >> -----Original Message-----
> >> From: Ivan Malov <ivan.malov@oktetlabs.ru>
> >> Sent: Thursday, May 27, 2021 11:25 AM
> >> To: dev@dpdk.org
> >> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
> Yigit
> >> <ferruh.yigit@intel.com>; Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
> >> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> >> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow
> rule
> >> dumping
> >>
> >> DPDK applications (for example, OvS) or tests which use RTE flow API
> need to
> >> log created or rejected flow rules to help to recognise what goes right or
> >> wrong. From this standpoint, testpmd-compliant format is nice for the
> >> purpose because it allows to copy-paste the flow rules and debug using
> >> testpmd.
> >>
> >> Recognisable pattern items:
> >> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
> VXLAN,
> >> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >>
> >> Recognisable actions:
> >> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF,
> PHY_PORT,
> >> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> >> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >>
> >> Recognisable RSS types (action RSS):
> >> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> >> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> >> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
> >> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY,
> L4_DST_ONLY.
> >>
> >> Unrecognised parts of the flow specification are represented by tokens
> >> "{unknown}" and "{unknown bits}". Interested parties are welcome to
> >> extend this tool to recognise more items and actions.
> >>
> >> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> >> ---
> >> lib/ethdev/meson.build | 1 +
> >> lib/ethdev/rte_flow.h | 33 +
> >> lib/ethdev/rte_flow_snprint.c | 1681
> >> +++++++++++++++++++++++++++++++++
> >> lib/ethdev/version.map | 3 +
> >> 4 files changed, 1718 insertions(+)
> >> create mode 100644 lib/ethdev/rte_flow_snprint.c
> >>
> >> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index
> >> 0205c853df..97bba4fa1b 100644
> >> --- a/lib/ethdev/meson.build
> >> +++ b/lib/ethdev/meson.build
> >> @@ -8,6 +8,7 @@ sources = files(
> >> 'rte_class_eth.c',
> >> 'rte_ethdev.c',
> >> 'rte_flow.c',
> >> + 'rte_flow_snprint.c',
> >> 'rte_mtr.c',
> >> 'rte_tm.c',
> >> )
> >> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> >> 961a5884fe..cd5e9ef631 100644
> >> --- a/lib/ethdev/rte_flow.h
> >> +++ b/lib/ethdev/rte_flow.h
> >> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t
> port_id,
> >> struct rte_flow_item *items,
> >> uint32_t num_of_items,
> >> struct rte_flow_error *error);
> >> +
> >> +/**
> >> + * @warning
> >> + * @b EXPERIMENTAL: this API may change without prior notice
> >> + *
> >> + * Dump testpmd-compliant textual representation of the flow rule.
> >> + * Invoke this with zero-size buffer to learn the string size and
> >> + * invoke this for the second time to actually dump the flow rule.
> >> + * The buffer size on the second invocation = the string size + 1.
> >> + *
> >> + * @param[out] buf
> >> + * Buffer to save the dump in, or NULL
> >> + * @param buf_size
> >> + * Buffer size, or 0
> >> + * @param[out] nb_chars_total
> >> + * Resulting string size (excluding the terminating null byte)
> >> + * @param[in] attr
> >> + * Flow rule attributes.
> >> + * @param[in] pattern
> >> + * Pattern specification (list terminated by the END pattern item).
> >> + * @param[in] actions
> >> + * Associated actions (list terminated by the END action).
> >> + *
> >> + * @return
> >> + * 0 on success, a negative errno value otherwise
> >> + */
> >> +__rte_experimental
> >> +int
> >> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> + const struct rte_flow_attr *attr,
> >> + const struct rte_flow_item pattern[],
> >> + const struct rte_flow_action actions[]);
> >> +
> >> #ifdef __cplusplus
> >> }
> >> #endif
> >> diff --git a/lib/ethdev/rte_flow_snprint.c b/lib/ethdev/rte_flow_snprint.c
> >> new file mode 100644 index 0000000000..513886528b
> >> --- /dev/null
> >> +++ b/lib/ethdev/rte_flow_snprint.c
> >> @@ -0,0 +1,1681 @@
> >> +/* SPDX-License-Identifier: BSD-3-Clause
> >> + *
> >> + * Copyright(c) 2021 Xilinx, Inc.
> >> + */
> >> +
> >> +#include <stdbool.h>
> >> +#include <stdint.h>
> >> +#include <string.h>
> >> +
> >> +#include <rte_common.h>
> >> +#include "rte_ethdev.h"
> >> +#include "rte_flow.h"
> >> +
> >> +static int
> >> +rte_flow_snprint_str(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + const char *str = value_ptr;
> >> + size_t write_size_max;
> >> + int retv;
> >> +
> >> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> >> + retv = snprintf(buf + *nb_chars_total, write_size_max, " %s", str);
> >> + if (retv < 0)
> >> + return -EFAULT;
> >> +
> >> + *nb_chars_total += retv;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ether_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + const struct rte_ether_addr *ea = value_ptr;
> >> + const uint8_t *ab = ea->addr_bytes;
> >> + size_t write_size_max;
> >> + int retv;
> >> +
> >> + write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> >> + retv = snprintf(buf + *nb_chars_total, write_size_max,
> >> + " %02x:%02x:%02x:%02x:%02x:%02x",
> >> + ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
> >> + if (retv < 0)
> >> + return -EFAULT;
> >> +
> >> + *nb_chars_total += retv;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ipv4_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + char addr_str[INET_ADDRSTRLEN];
> >> +
> >> + if (inet_ntop(AF_INET, value_ptr, addr_str, sizeof(addr_str)) ==
> >> NULL)
> >> + return -EFAULT;
> >> +
> >> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ipv6_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + char addr_str[INET6_ADDRSTRLEN];
> >> +
> >> + if (inet_ntop(AF_INET6, value_ptr, addr_str, sizeof(addr_str)) ==
> >> NULL)
> >> + return -EFAULT;
> >> +
> >> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> >> +}
> >> +
> >> +#define SNPRINT(_type, _fmt) \
> >> + do { \
> >> + const _type *vp = value_ptr; \
> >> + size_t write_size_max; \
> >> + int retv; \
> >> + \
> >> + write_size_max = buf_size - \
> >> + RTE_MIN(buf_size, *nb_chars_total); \
> >> + retv = snprintf(buf + *nb_chars_total, write_size_max,
> >> \
> >> + _fmt, *vp); \
> >> + if (retv < 0) \
> >> + return -EFAULT;
> >> \
> >> + \
> >> + *nb_chars_total += retv; \
> >> + \
> >> + return 0; \
> >> + } while (0)
> >> +
> >> +static int
> >> +rte_flow_snprint_uint32(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint32_t, " %u");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint32_t, " 0x%08x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex24(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint32_t, " 0x%06x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex20(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint32_t, " 0x%05x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint16(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint16_t, " %hu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint16_be2cpu(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void *value_ptr) {
> >> + const uint16_t *valuep = value_ptr;
> >> + uint16_t value = rte_be_to_cpu_16(*valuep);
> >> +
> >> + value_ptr = &value;
> >> +
> >> + SNPRINT(uint16_t, " %hu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex16_be2cpu(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void *value_ptr) {
> >> + const uint16_t *valuep = value_ptr;
> >> + uint16_t value = rte_be_to_cpu_16(*valuep);
> >> +
> >> + value_ptr = &value;
> >> +
> >> + SNPRINT(uint16_t, " 0x%04x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint8(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint8_t, " %hhu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex8(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint8_t, " 0x%02x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_byte(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const void *value_ptr)
> >> +{
> >> + SNPRINT(uint8_t, "%02x");
> >> +}
> >> +
> >> +#undef SNPRINT
> >> +
> >> +static int
> >> +rte_flow_snprint_attr(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const struct rte_flow_attr *attr) {
> >> + int rc;
> >> +
> >> + if (attr == NULL)
> >> + return 0;
> >> +
> >> + if (attr->group != 0) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "group");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> + &attr->group);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + if (attr->priority != 0) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "priority");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> + &attr->priority);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + if (attr->transfer) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "transfer");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + if (attr->ingress) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "ingress");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + if (attr->egress) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "egress");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static void
> >> +rte_flow_item_init_parse(const struct rte_flow_item *item, size_t
> >> item_size,
> >> + void *spec, void *last, void *mask) {
> >> + if (item->spec != NULL)
> >> + memcpy(spec, item->spec, item_size);
> >> + else
> >> + memset(spec, 0, item_size);
> >> +
> >> + if (item->last != NULL)
> >> + memcpy(last, item->last, item_size);
> >> + else
> >> + memset(last, 0, item_size);
> >> +
> >> + if (item->mask != NULL)
> >> + memcpy(mask, item->mask, item_size);
> >> + else
> >> + memset(mask, 0, item_size);
> >> +}
> >> +
> >> +static bool
> >> +rte_flow_buf_is_all_zeros(const void *buf_ptr, size_t buf_size) {
> >> + const uint8_t *buf = buf_ptr;
> >> + unsigned int i;
> >> + uint8_t t = 0;
> >> +
> >> + for (i = 0; i < buf_size; ++i)
> >> + t |= buf[i];
> >> +
> >> + return (t == 0);
> >> +}
> >> +
> >> +static bool
> >> +rte_flow_buf_is_all_ones(const void *buf_ptr, size_t buf_size) {
> >> + const uint8_t *buf = buf_ptr;
> >> + unsigned int i;
> >> + uint8_t t = ~0;
> >> +
> >> + for (i = 0; i < buf_size; ++i)
> >> + t &= buf[i];
> >> +
> >> + return (t == (uint8_t)(~0));
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_field(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + int (*value_dump_cb)(char *, size_t, size_t *,
> >> + const void *),
> >> + int (*mask_dump_cb)(char *, size_t, size_t *,
> >> + const void *),
> >> + const char *field_name, size_t field_size,
> >> + void *field_spec, void *field_last,
> >> + void *field_mask, void *field_full_mask) {
> >> + bool mask_is_all_ones;
> >> + bool last_is_futile;
> >> + int rc;
> >> +
> >> + if (rte_flow_buf_is_all_zeros(field_mask, field_size))
> >> + return 0;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> field_name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (field_full_mask != NULL) {
> >> + mask_is_all_ones = (memcmp(field_mask, field_full_mask,
> >> + field_size) == 0);
> >> + } else {
> >> + mask_is_all_ones = rte_flow_buf_is_all_ones(field_mask,
> >> + field_size);
> >> + }
> >> + last_is_futile = rte_flow_buf_is_all_zeros(field_last, field_size) ||
> >> + (memcmp(field_spec, field_last, field_size) == 0);
> >> +
> >> + if (mask_is_all_ones && last_is_futile) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "is");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
> >> field_spec);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + goto done;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "spec");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (!last_is_futile) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + field_name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "last");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = value_dump_cb(buf, buf_size, nb_chars_total,
> >> field_last);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> field_name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "mask");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = mask_dump_cb(buf, buf_size, nb_chars_total, field_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> +done:
> >> + /*
> >> + * Zeroise the printed field. When all item fields have been printed,
> >> + * the corresponding item handler will make sure that the whole item
> >> + * mask is all-zeros. This is needed to highlight unsupported fields.
> >> + *
> >> + * If the provided field mask pointer refers to a separate container
> >> + * rather than to the field in the item mask directly, it's the duty
> >> + * of the item handler to clear the field in the item mask correctly.
> >> + */
> >> + memset(field_mask, 0, field_size);
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vf(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_vf *spec = spec_ptr;
> >> + struct rte_flow_item_vf *last = last_ptr;
> >> + struct rte_flow_item_vf *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex32, "id",
> >> + sizeof(spec->id), &spec->id, &last-
> >>> id,
> >> + &mask->id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_phy_port(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, void *spec_ptr,
> >> + void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_phy_port *spec = spec_ptr;
> >> + struct rte_flow_item_phy_port *last = last_ptr;
> >> + struct rte_flow_item_phy_port *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex32, "index",
> >> + sizeof(spec->index), &spec->index,
> >> + &last->index, &mask->index, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_port_id(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, void *spec_ptr,
> >> + void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_port_id *spec = spec_ptr;
> >> + struct rte_flow_item_port_id *last = last_ptr;
> >> + struct rte_flow_item_port_id *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex32, "id",
> >> + sizeof(spec->id), &spec->id, &last-
> >>> id,
> >> + &mask->id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_eth(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_eth *spec = spec_ptr;
> >> + struct rte_flow_item_eth *last = last_ptr;
> >> + struct rte_flow_item_eth *mask = mask_ptr;
> >> + uint8_t has_vlan_full_mask = 1;
> >> + uint8_t has_vlan_spec;
> >> + uint8_t has_vlan_last;
> >> + uint8_t has_vlan_mask;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ether_addr,
> >> + rte_flow_snprint_ether_addr, "dst",
> >> + sizeof(spec->hdr.d_addr),
> >> + &spec->hdr.d_addr, &last-
> >>> hdr.d_addr,
> >> + &mask->hdr.d_addr, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ether_addr,
> >> + rte_flow_snprint_ether_addr, "src",
> >> + sizeof(spec->hdr.s_addr),
> >> + &spec->hdr.s_addr, &last-
> >>> hdr.s_addr,
> >> + &mask->hdr.s_addr, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "type",
> >> + sizeof(spec->hdr.ether_type),
> >> + &spec->hdr.ether_type,
> >> + &last->hdr.ether_type,
> >> + &mask->hdr.ether_type, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + has_vlan_spec = spec->has_vlan;
> >> + has_vlan_last = last->has_vlan;
> >> + has_vlan_mask = mask->has_vlan;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_uint8, "has_vlan",
> >> + sizeof(has_vlan_spec),
> >> &has_vlan_spec,
> >> + &has_vlan_last, &has_vlan_mask,
> >> + &has_vlan_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + mask->has_vlan = 0;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vlan(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_vlan *spec = spec_ptr;
> >> + struct rte_flow_item_vlan *last = last_ptr;
> >> + struct rte_flow_item_vlan *mask = mask_ptr;
> >> + uint8_t has_more_vlan_full_mask = 1;
> >> + uint8_t has_more_vlan_spec;
> >> + uint8_t has_more_vlan_last;
> >> + uint8_t has_more_vlan_mask;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "tci",
> >> + sizeof(spec->hdr.vlan_tci),
> >> + &spec->hdr.vlan_tci,
> >> + &last->hdr.vlan_tci,
> >> + &mask->hdr.vlan_tci, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + "inner_type",
> >> + sizeof(spec->hdr.eth_proto),
> >> + &spec->hdr.eth_proto,
> >> + &last->hdr.eth_proto,
> >> + &mask->hdr.eth_proto, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + has_more_vlan_spec = spec->has_more_vlan;
> >> + has_more_vlan_last = last->has_more_vlan;
> >> + has_more_vlan_mask = mask->has_more_vlan;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_uint8,
> >> + "has_more_vlan",
> >> + sizeof(has_more_vlan_spec),
> >> + &has_more_vlan_spec,
> >> + &has_more_vlan_last,
> >> + &has_more_vlan_mask,
> >> + &has_more_vlan_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + mask->has_more_vlan = 0;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_ipv4(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_ipv4 *spec = spec_ptr;
> >> + struct rte_flow_item_ipv4 *last = last_ptr;
> >> + struct rte_flow_item_ipv4 *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex8,
> >> + rte_flow_snprint_hex8, "tos",
> >> + sizeof(spec->hdr.type_of_service),
> >> + &spec->hdr.type_of_service,
> >> + &last->hdr.type_of_service,
> >> + &mask->hdr.type_of_service,
> >> NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + "packet_id",
> >> + sizeof(spec->hdr.packet_id),
> >> + &spec->hdr.packet_id,
> >> + &last->hdr.packet_id,
> >> + &mask->hdr.packet_id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + "fragment_offset",
> >> + sizeof(spec->hdr.fragment_offset),
> >> + &spec->hdr.fragment_offset,
> >> + &last->hdr.fragment_offset,
> >> + &mask->hdr.fragment_offset,
> >> NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_hex8, "ttl",
> >> + sizeof(spec->hdr.time_to_live),
> >> + &spec->hdr.time_to_live,
> >> + &last->hdr.time_to_live,
> >> + &mask->hdr.time_to_live, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_hex8, "proto",
> >> + sizeof(spec->hdr.next_proto_id),
> >> + &spec->hdr.next_proto_id,
> >> + &last->hdr.next_proto_id,
> >> + &mask->hdr.next_proto_id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ipv4_addr,
> >> + rte_flow_snprint_ipv4_addr, "src",
> >> + sizeof(spec->hdr.src_addr),
> >> + &spec->hdr.src_addr,
> >> + &last->hdr.src_addr,
> >> + &mask->hdr.src_addr, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ipv4_addr,
> >> + rte_flow_snprint_ipv4_addr, "dst",
> >> + sizeof(spec->hdr.dst_addr),
> >> + &spec->hdr.dst_addr,
> >> + &last->hdr.dst_addr,
> >> + &mask->hdr.dst_addr, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_ipv6(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + uint32_t tc_full_mask = (RTE_IPV6_HDR_TC_MASK >>
> >> RTE_IPV6_HDR_TC_SHIFT);
> >> + uint32_t fl_full_mask = (RTE_IPV6_HDR_FL_MASK >>
> >> RTE_IPV6_HDR_FL_SHIFT);
> >> + struct rte_flow_item_ipv6 *spec = spec_ptr;
> >> + struct rte_flow_item_ipv6 *last = last_ptr;
> >> + struct rte_flow_item_ipv6 *mask = mask_ptr;
> >> + uint8_t has_frag_ext_full_mask = 1;
> >> + uint8_t has_frag_ext_spec;
> >> + uint8_t has_frag_ext_last;
> >> + uint8_t has_frag_ext_mask;
> >> + uint32_t vtc_flow;
> >> + uint32_t fl_spec;
> >> + uint32_t fl_last;
> >> + uint32_t fl_mask;
> >> + uint32_t tc_spec;
> >> + uint32_t tc_last;
> >> + uint32_t tc_mask;
> >> + int rc;
> >> +
> >> + vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
> >> + tc_spec = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> + fl_spec = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> + vtc_flow = rte_be_to_cpu_32(last->hdr.vtc_flow);
> >> + tc_last = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> + fl_last = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> + vtc_flow = rte_be_to_cpu_32(mask->hdr.vtc_flow);
> >> + tc_mask = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> + fl_mask = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> + mask->hdr.vtc_flow &=
> >> ~rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK |
> >> + RTE_IPV6_HDR_FL_MASK);
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex8,
> >> + rte_flow_snprint_hex8, "tc",
> >> + sizeof(tc_spec), &tc_spec, &tc_last,
> >> + &tc_mask, &tc_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex20, "flow",
> >> + sizeof(fl_spec), &fl_spec, &fl_last,
> >> + &fl_mask, &fl_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_hex8, "proto",
> >> + sizeof(spec->hdr.proto),
> >> + &spec->hdr.proto,
> >> + &last->hdr.proto,
> >> + &mask->hdr.proto, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_hex8, "hop",
> >> + sizeof(spec->hdr.hop_limits),
> >> + &spec->hdr.hop_limits,
> >> + &last->hdr.hop_limits,
> >> + &mask->hdr.hop_limits, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ipv6_addr,
> >> + rte_flow_snprint_ipv6_addr, "src",
> >> + sizeof(spec->hdr.src_addr),
> >> + &spec->hdr.src_addr,
> >> + &last->hdr.src_addr,
> >> + &mask->hdr.src_addr, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_ipv6_addr,
> >> + rte_flow_snprint_ipv6_addr, "dst",
> >> + sizeof(spec->hdr.dst_addr),
> >> + &spec->hdr.dst_addr,
> >> + &last->hdr.dst_addr,
> >> + &mask->hdr.dst_addr, NULL);
> >> +
> >> + has_frag_ext_spec = spec->has_frag_ext;
> >> + has_frag_ext_last = last->has_frag_ext;
> >> + has_frag_ext_mask = mask->has_frag_ext;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint8,
> >> + rte_flow_snprint_uint8,
> >> "has_frag_ext",
> >> + sizeof(has_frag_ext_spec),
> >> + &has_frag_ext_spec,
> >> &has_frag_ext_last,
> >> + &has_frag_ext_mask,
> >> + &has_frag_ext_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + mask->has_frag_ext = 0;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_udp(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_udp *spec = spec_ptr;
> >> + struct rte_flow_item_udp *last = last_ptr;
> >> + struct rte_flow_item_udp *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "src",
> >> + sizeof(spec->hdr.src_port),
> >> + &spec->hdr.src_port,
> >> + &last->hdr.src_port,
> >> + &mask->hdr.src_port, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "dst",
> >> + sizeof(spec->hdr.dst_port),
> >> + &spec->hdr.dst_port,
> >> + &last->hdr.dst_port,
> >> + &mask->hdr.dst_port, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_tcp(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_tcp *spec = spec_ptr;
> >> + struct rte_flow_item_tcp *last = last_ptr;
> >> + struct rte_flow_item_tcp *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "src",
> >> + sizeof(spec->hdr.src_port),
> >> + &spec->hdr.src_port,
> >> + &last->hdr.src_port,
> >> + &mask->hdr.src_port, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> "dst",
> >> + sizeof(spec->hdr.dst_port),
> >> + &spec->hdr.dst_port,
> >> + &last->hdr.dst_port,
> >> + &mask->hdr.dst_port, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex8,
> >> + rte_flow_snprint_hex8, "flags",
> >> + sizeof(spec->hdr.tcp_flags),
> >> + &spec->hdr.tcp_flags,
> >> + &last->hdr.tcp_flags,
> >> + &mask->hdr.tcp_flags, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vxlan(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_vxlan *spec = spec_ptr;
> >> + struct rte_flow_item_vxlan *last = last_ptr;
> >> + struct rte_flow_item_vxlan *mask = mask_ptr;
> >> + uint32_t vni_full_mask = 0xffffff;
> >> + uint32_t vni_spec;
> >> + uint32_t vni_last;
> >> + uint32_t vni_mask;
> >> + int rc;
> >> +
> >> + vni_spec = rte_be_to_cpu_32(spec->hdr.vx_vni) >> 8;
> >> + vni_last = rte_be_to_cpu_32(last->hdr.vx_vni) >> 8;
> >> + vni_mask = rte_be_to_cpu_32(mask->hdr.vx_vni) >> 8;
> >> +
> >> + mask->hdr.vx_vni &= ~RTE_BE32(0xffffff00);
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex24, "vni",
> >> + sizeof(vni_spec), &vni_spec,
> >> + &vni_last, &vni_mask,
> >> + &vni_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_nvgre(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_nvgre *spec = spec_ptr;
> >> + struct rte_flow_item_nvgre *last = last_ptr;
> >> + struct rte_flow_item_nvgre *mask = mask_ptr;
> >> + uint32_t *tni_and_flow_id_specp = (uint32_t *)spec->tni;
> >> + uint32_t *tni_and_flow_id_lastp = (uint32_t *)last->tni;
> >> + uint32_t *tni_and_flow_id_maskp = (uint32_t *)mask->tni;
> >> + uint32_t tni_full_mask = 0xffffff;
> >> + uint32_t tni_spec;
> >> + uint32_t tni_last;
> >> + uint32_t tni_mask;
> >> + int rc;
> >> +
> >> + tni_spec = rte_be_to_cpu_32(*tni_and_flow_id_specp) >> 8;
> >> + tni_last = rte_be_to_cpu_32(*tni_and_flow_id_lastp) >> 8;
> >> + tni_mask = rte_be_to_cpu_32(*tni_and_flow_id_maskp) >> 8;
> >> +
> >> + memset(mask->tni, 0, sizeof(mask->tni));
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex24, "tni",
> >> + sizeof(tni_spec), &tni_spec,
> >> + &tni_last, &tni_mask,
> >> + &tni_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_geneve(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_geneve *spec = spec_ptr;
> >> + struct rte_flow_item_geneve *last = last_ptr;
> >> + struct rte_flow_item_geneve *mask = mask_ptr;
> >> + uint32_t *vni_and_rsvd_specp = (uint32_t *)spec->vni;
> >> + uint32_t *vni_and_rsvd_lastp = (uint32_t *)last->vni;
> >> + uint32_t *vni_and_rsvd_maskp = (uint32_t *)mask->vni;
> >> + uint32_t vni_full_mask = 0xffffff;
> >> + uint16_t optlen_full_mask = 0x3f;
> >> + uint16_t optlen_spec;
> >> + uint16_t optlen_last;
> >> + uint16_t optlen_mask;
> >> + uint32_t vni_spec;
> >> + uint32_t vni_last;
> >> + uint32_t vni_mask;
> >> + int rc;
> >> +
> >> + optlen_spec = rte_be_to_cpu_16(spec->ver_opt_len_o_c_rsvd0) &
> >> 0x3f00;
> >> + optlen_spec >>= 8;
> >> +
> >> + optlen_last = rte_be_to_cpu_16(last->ver_opt_len_o_c_rsvd0) &
> >> 0x3f00;
> >> + optlen_last >>= 8;
> >> +
> >> + optlen_mask = rte_be_to_cpu_16(mask->ver_opt_len_o_c_rsvd0)
> >> & 0x3f00;
> >> + optlen_mask >>= 8;
> >> +
> >> + mask->ver_opt_len_o_c_rsvd0 &= ~RTE_BE16(0x3f00);
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16,
> >> + rte_flow_snprint_hex8, "optlen",
> >> + sizeof(optlen_spec), &optlen_spec,
> >> + &optlen_last, &optlen_mask,
> >> + &optlen_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + "protocol", sizeof(spec->protocol),
> >> + &spec->protocol, &last->protocol,
> >> + &mask->protocol, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + vni_spec = rte_be_to_cpu_32(*vni_and_rsvd_specp) >> 8;
> >> + vni_last = rte_be_to_cpu_32(*vni_and_rsvd_lastp) >> 8;
> >> + vni_mask = rte_be_to_cpu_32(*vni_and_rsvd_maskp) >> 8;
> >> +
> >> + memset(mask->vni, 0, sizeof(mask->vni));
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex24, "vni",
> >> + sizeof(vni_spec), &vni_spec,
> >> + &vni_last, &vni_mask,
> >> + &vni_full_mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_mark(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_mark *spec = spec_ptr;
> >> + struct rte_flow_item_mark *last = last_ptr;
> >> + struct rte_flow_item_mark *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint32,
> >> + rte_flow_snprint_hex32, "id",
> >> + sizeof(spec->id), &spec->id,
> >> + &last->id, &mask->id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_pppoed(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> + struct rte_flow_item_pppoe *spec = spec_ptr;
> >> + struct rte_flow_item_pppoe *last = last_ptr;
> >> + struct rte_flow_item_pppoe *mask = mask_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> + rte_flow_snprint_uint16_be2cpu,
> >> + rte_flow_snprint_hex16_be2cpu,
> >> + "seid", sizeof(spec->session_id),
> >> + &spec->session_id, &last-
> >>> session_id,
> >> + &mask->session_id, NULL);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static const struct {
> >> + const char *name;
> >> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_char_total,
> >> + void *spec_ptr, void *last_ptr, void *mask_ptr);
> >> + size_t size;
> >> +} item_table[] = {
> >> + [RTE_FLOW_ITEM_TYPE_VOID] = {
> >> + .name = "void"
> >> + },
> >> + [RTE_FLOW_ITEM_TYPE_PF] = {
> >> + .name = "pf"
> >> + },
> >> + [RTE_FLOW_ITEM_TYPE_PPPOES] = {
> >> + .name = "pppoes"
> >> + },
> >> + [RTE_FLOW_ITEM_TYPE_PPPOED] = {
> >> + .name = "pppoed",
> >> + .parse_cb = rte_flow_snprint_item_pppoed,
> >> + .size = sizeof(struct rte_flow_item_pppoe)
> >> + },
> >> +
> >> +#define ITEM(_name_uppercase, _name_lowercase) \
> >> + [RTE_FLOW_ITEM_TYPE_##_name_uppercase] = {
> >> \
> >> + .name = #_name_lowercase, \
> >> + .parse_cb = rte_flow_snprint_item_##_name_lowercase,
> >> \
> >> + .size = sizeof(struct rte_flow_item_##_name_lowercase)
> >> \
> >> + }
> >> +
> >> + ITEM(VF, vf),
> >> + ITEM(PHY_PORT, phy_port),
> >> + ITEM(PORT_ID, port_id),
> >> + ITEM(ETH, eth),
> >> + ITEM(VLAN, vlan),
> >> + ITEM(IPV4, ipv4),
> >> + ITEM(IPV6, ipv6),
> >> + ITEM(UDP, udp),
> >> + ITEM(TCP, tcp),
> >> + ITEM(VXLAN, vxlan),
> >> + ITEM(NVGRE, nvgre),
> >> + ITEM(GENEVE, geneve),
> >> + ITEM(MARK, mark),
> >> +
> >> +#undef ITEM
> >> +};
> >> +
> >> +static int
> >> +rte_flow_snprint_item(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const struct rte_flow_item *item) {
> >> + int rc;
> >> +
> >> + if (item->type < 0 || item->type >= RTE_DIM(item_table) ||
> >> + item_table[item->type].name == NULL) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "{unknown}");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + goto out;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + item_table[item->type].name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (item_table[item->type].parse_cb != NULL) {
> >> + size_t item_size = item_table[item->type].size;
> >> + uint8_t spec[item_size];
> >> + uint8_t last[item_size];
> >> + uint8_t mask[item_size];
> >> +
> >> + rte_flow_item_init_parse(item, item_size, spec, last, mask);
> >> +
> >> + rc = item_table[item->type].parse_cb(buf, buf_size,
> >> + nb_chars_total,
> >> + spec, last, mask);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (!rte_flow_buf_is_all_zeros(mask, item_size)) {
> >> + rc = rte_flow_snprint_str(buf, buf_size,
> >> + nb_chars_total,
> >> + "{unknown bits}");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> + }
> >> +
> >> +out:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_pattern(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const struct rte_flow_item pattern[]) {
> >> + const struct rte_flow_item *item;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "pattern");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (pattern == NULL)
> >> + goto end;
> >> +
> >> + for (item = pattern;
> >> + item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
> >> + rc = rte_flow_snprint_item(buf, buf_size, nb_chars_total,
> >> item);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> +end:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_jump(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_jump *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "group");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> + &conf->group);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_mark(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_mark *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >>> id);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_queue(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void *conf_ptr) {
> >> + const struct rte_flow_action_queue *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> >> + &conf->index);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_count(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void *conf_ptr) {
> >> + const struct rte_flow_action_count *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "identifier");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >>> id);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (conf->shared) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "shared");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_func(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total,
> >> + enum rte_eth_hash_function func)
> >> +{
> >> + int rc;
> >> +
> >> + if (func == RTE_ETH_HASH_FUNCTION_DEFAULT)
> >> + return 0;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "func");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + switch (func) {
> >> + case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "toeplitz");
> >> + break;
> >> + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "simple_xor");
> >> + break;
> >> + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "symmetric_toeplitz");
> >> + break;
> >> + default:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "{unknown}");
> >> + break;
> >> + }
> >> +
> >> + return rc;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_level(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, uint32_t level) {
> >> + int rc;
> >> +
> >> + if (level == 0)
> >> + return 0;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "level");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &level);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static const struct {
> >> + const char *name;
> >> + uint64_t flag;
> >> +} rss_type_table[] = {
> >> + { "ipv4", ETH_RSS_IPV4 },
> >> + { "ipv4-frag", ETH_RSS_FRAG_IPV4 },
> >> + { "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
> >> + { "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
> >> + { "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
> >> + { "ipv6", ETH_RSS_IPV6 },
> >> + { "ipv6-frag", ETH_RSS_FRAG_IPV6 },
> >> + { "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
> >> + { "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
> >> + { "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
> >> + { "ipv6-ex", ETH_RSS_IPV6_EX },
> >> + { "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
> >> + { "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
> >> +
> >> + { "l3-src-only", ETH_RSS_L3_SRC_ONLY },
> >> + { "l3-dst-only", ETH_RSS_L3_DST_ONLY },
> >> + { "l4-src-only", ETH_RSS_L4_SRC_ONLY },
> >> + { "l4-dst-only", ETH_RSS_L4_DST_ONLY }, };
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_types(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, uint64_t types) {
> >> + unsigned int i;
> >> + int rc;
> >> +
> >> + if (types == 0)
> >> + return 0;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "types");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + for (i = 0; i < RTE_DIM(rss_type_table); ++i) {
> >> + uint64_t flag = rss_type_table[i].flag;
> >> +
> >> + if ((types & flag) == 0)
> >> + continue;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + rss_type_table[i].name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + types &= ~flag;
> >> + }
> >> +
> >> + if (types != 0) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "{unknown}");
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_queues(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total,
> >> + const uint16_t *queues,
> >> + unsigned int nb_queues)
> >> +{
> >> + unsigned int i;
> >> + int rc;
> >> +
> >> + if (nb_queues == 0)
> >> + return 0;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "queues");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + for (i = 0; i < nb_queues; ++i) {
> >> + rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> >> + &queues[i]);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_rss *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_action_rss_func(buf, buf_size, nb_chars_total,
> >> + conf->func);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_action_rss_level(buf, buf_size,
> >> nb_chars_total,
> >> + conf->level);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_action_rss_types(buf, buf_size,
> >> nb_chars_total,
> >> + conf->types);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (conf->key_len != 0) {
> >> + if (conf->key != NULL) {
> >> + unsigned int i;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size,
> >> nb_chars_total,
> >> + "" /* results in space */);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + for (i = 0; i < conf->key_len; ++i) {
> >> + rc = rte_flow_snprint_byte(buf, buf_size,
> >> + nb_chars_total,
> >> + &conf->key[i]);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "key_len");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> + &conf->key_len);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_action_rss_queues(buf, buf_size,
> >> nb_chars_total,
> >> + conf->queue, conf-
> >>> queue_num);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_vf(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_vf *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + if (conf->original) {
> >> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "original on");
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >>> id);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_phy_port(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void
> >> *conf_ptr) {
> >> + const struct rte_flow_action_phy_port *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + if (conf->original) {
> >> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "original on");
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> + &conf->index);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_port_id(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total, const void *conf_ptr)
> >> {
> >> + const struct rte_flow_action_port_id *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + if (conf->original) {
> >> + return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "original on");
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf-
> >>> id);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_push_vlan(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_of_push_vlan *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> "ethertype");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_hex16_be2cpu(buf, buf_size, nb_chars_total,
> >> + &conf->ethertype);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_set_vlan_vid(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_of_set_vlan_vid *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_vid");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint16_be2cpu(buf, buf_size, nb_chars_total,
> >> + &conf->vlan_vid);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_set_vlan_pcp(char *buf, size_t buf_size,
> >> + size_t *nb_chars_total,
> >> + const void *conf_ptr)
> >> +{
> >> + const struct rte_flow_action_of_set_vlan_pcp *conf = conf_ptr;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_pcp");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_uint8(buf, buf_size, nb_chars_total,
> >> + &conf->vlan_pcp);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static const struct {
> >> + const char *name;
> >> + int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> + const void *conf_ptr);
> >> +} action_table[] = {
> >> + [RTE_FLOW_ACTION_TYPE_VOID] = {
> >> + .name = "void"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_FLAG] = {
> >> + .name = "flag"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_DROP] = {
> >> + .name = "drop"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_PF] = {
> >> + .name = "pf"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
> >> + .name = "of_pop_vlan"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
> >> + .name = "vxlan_encap"
> >> + },
> >> + [RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
> >> + .name = "vxlan_decap"
> >> + },
> >> +
> >> +#define ACTION(_name_uppercase, _name_lowercase) \
> >> + [RTE_FLOW_ACTION_TYPE_##_name_uppercase] = {
> >> \
> >> + .name = #_name_lowercase, \
> >> + .parse_cb = rte_flow_snprint_action_##_name_lowercase,
> >> \
> >> + }
> >> +
> >> + ACTION(JUMP, jump),
> >> + ACTION(MARK, mark),
> >> + ACTION(QUEUE, queue),
> >> + ACTION(COUNT, count),
> >> + ACTION(RSS, rss),
> >> + ACTION(VF, vf),
> >> + ACTION(PHY_PORT, phy_port),
> >> + ACTION(PORT_ID, port_id),
> >> + ACTION(OF_PUSH_VLAN, of_push_vlan),
> >> + ACTION(OF_SET_VLAN_VID, of_set_vlan_vid),
> >> + ACTION(OF_SET_VLAN_PCP, of_set_vlan_pcp),
> >> +
> >> +#undef ACTION
> >> +};
> >> +
> >> +static int
> >> +rte_flow_snprint_action(char *buf, size_t buf_size, size_t
> *nb_chars_total,
> >> + const struct rte_flow_action *action) {
> >> + int rc;
> >> +
> >> + if (action->type < 0 || action->type >= RTE_DIM(action_table) ||
> >> + action_table[action->type].name == NULL) {
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + "{unknown}");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + goto out;
> >> + }
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> + action_table[action->type].name);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (action_table[action->type].parse_cb != NULL &&
> >> + action->conf != NULL) {
> >> + rc = action_table[action->type].parse_cb(buf, buf_size,
> >> + nb_chars_total,
> >> + action->conf);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> +out:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_actions(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> + const struct rte_flow_action actions[]) {
> >> + const struct rte_flow_action *action;
> >> + int rc;
> >> +
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "actions");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + if (actions == NULL)
> >> + goto end;
> >> +
> >> + for (action = actions;
> >> + action->type != RTE_FLOW_ACTION_TYPE_END; ++action) {
> >> + rc = rte_flow_snprint_action(buf, buf_size, nb_chars_total,
> >> + action);
> >> + if (rc != 0)
> >> + return rc;
> >> + }
> >> +
> >> +end:
> >> + rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +int
> >> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> + const struct rte_flow_attr *attr,
> >> + const struct rte_flow_item pattern[],
> >> + const struct rte_flow_action actions[]) {
> >> + int rc;
> >> +
> >> + if (buf == NULL && buf_size != 0)
> >> + return -EINVAL;
> >> +
> >> + *nb_chars_total = 0;
> >> +
> >> + rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total,
> >> pattern);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total,
> >> actions);
> >> + if (rc != 0)
> >> + return rc;
> >> +
> >> + return 0;
> >> +}
> >> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
> >> 44d30b05ae..a626cac944 100644
> >> --- a/lib/ethdev/version.map
> >> +++ b/lib/ethdev/version.map
> >> @@ -249,6 +249,9 @@ EXPERIMENTAL {
> >> rte_mtr_meter_policy_delete;
> >> rte_mtr_meter_policy_update;
> >> rte_mtr_meter_policy_validate;
> >> +
> >> + # added in 21.08
> >> + rte_flow_snprint;
> >> };
> >>
> >> INTERNAL {
> >> --
> >> 2.20.1
>
> --
> Ivan M
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-02 13:32 ` Ori Kam
@ 2021-06-02 13:49 ` Andrew Rybchenko
2021-06-03 8:25 ` Ori Kam
0 siblings, 1 reply; 16+ messages in thread
From: Andrew Rybchenko @ 2021-06-02 13:49 UTC (permalink / raw)
To: Ori Kam, Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Ray Kinsella, Neil Horman
Hi Ori,
On 6/2/21 4:32 PM, Ori Kam wrote:
> Hi Ivan,
>
>> -----Original Message-----
>> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
>>
>> Hi Ori,
>>
>> Your review efforts are much appreciated. I understand your concern
>> about the partial item/action coverage, but there are some points to be
>> considered when addressing it:
>> - It's anyway hardly possible to use the printed flow directly in
>> testpmd if it contains "opaque", or "PMD-specific", items/actions in
>> terms of the tunnel offload model. These items/actions have to be
>> omitted when printing the flow, and their absence in the resulting
>> string means that copy/pasting the flow to testpmd isn't helpful in this
>> particular case.
> I fully agree with you that some of the rules can't be printed. That is why.
> I'm not sure having partial solution is the way to go.
Sorry, I disagree that possibility to cover 99% and
impossibility to cover just 1% is the reason to discard.
> If OVS for example cares about
> some of the item/action, maybe this log should be on their part.
OvS does and as far as I can see already has bugs there.
Of course, nobody says that it is testpmd-complient format,
but it definitely looks so.
Anyway, it sounds strange do duplicate the functionality in
many-many DPDK apps. Of course, it removes the headache
from DPDK maintainers, but it is hardly friendly to DPDK
community in general.
>> - There's action ENCAP which also can't be fully represented by the tool
>> in question, simply because it has no parameters. In tespmd, one first
>> has to issue "set vxlan" command to configure the encap. header, whilst
>> "vxlan" token in the flow rule string just refers to the previously set
>> encap. parameters. The suggested flow print helper can't reliably print
>> these two components ("set vxlan" and the flow rule itself) as they
>> belong to different testpmd command strings.
>>
> Again, I agree with you but like my above answer, do we want a partial solution
> in DPDK?
My answer is YES.
>> As you might see, completeness of the solution wouldn't necessarily be
>> reachable, even if full item/action coverage was provided.
>>
>> As for the item/action coverage itself, it's rather controversial. On
>> the one hand, yes, we should probably try to cover more items and
>> actions in the suggested patch, to the extent allowed by our current
>> priorities. But on the other hand, the existing coverage might not be
>> that poor: it's fairly elaborate and at least allows to print the most
>> common flow rules.
>>
> That is my main issue you are going to push something that is good for you
> and maybe some other cases, but it can't be used by all application, even with
> the most basic commands like encap.
Isn't it fair: if one wants to use something, be ready to help
and extend it. No pain, no gain :) Jokes aside - we're ready to
support "the most basic commands", just list it, but do not say
everything is basic. "everything" will delay the feature and
simply unfair to demand (IMHO).
IMHO, the feature would make flow API more friendly and easier
to debug, report bugs etc.
>> Yes, macros and some other cunning ways to cover more flow specifics
>> might come in handy, but, at the same time, can be rather error prone.
>> Sometimes it's more robust to just write the code out in full.
>>
> I'm always in favor of easy of extra complex but too hard is also not good.
>
> Thanks,
> Ori
>> Thank you.
>>
>> On 30/05/2021 10:27, Ori Kam wrote:
>>> Hi Ivan,
>>>
>>> First nice idea and thanks for the picking up the ball.
>>>
>>> Before a detail review,
>>> The main thing I'm concerned about is that this print will be partially
>> supported,
>>> I know that you covered this issue by printing unknown for unsupported
>> item/actions,
>>> but this will mean that it is enough that one item/action is not supported
>> and already the
>>> flow can't be used in testpmd.
>>> To get full support it means that the developer needs to add such print
>> with each new
>>> item/action. I agree it is possible, but it has high overhead for each feature.
>>>
>>> Maybe we should somehow create a macros for the prints or other easier
>> to support ways.
>>>
>>> For example, just printing the ipv4 has 7 function calls inside of it each one
>> with error checking,
>>> and I'm not counting the dedicated functions.
>>>
>>>
>>>
>>> Best,
>>> Ori
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ivan Malov <ivan.malov@oktetlabs.ru>
>>>> Sent: Thursday, May 27, 2021 11:25 AM
>>>> To: dev@dpdk.org
>>>> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
>> Yigit
>>>> <ferruh.yigit@intel.com>; Andrew Rybchenko
>>>> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
>>>> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
>>>> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow
>> rule
>>>> dumping
>>>>
>>>> DPDK applications (for example, OvS) or tests which use RTE flow API
>> need to
>>>> log created or rejected flow rules to help to recognise what goes right or
>>>> wrong. From this standpoint, testpmd-compliant format is nice for the
>>>> purpose because it allows to copy-paste the flow rules and debug using
>>>> testpmd.
>>>>
>>>> Recognisable pattern items:
>>>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
>> VXLAN,
>>>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>>>>
>>>> Recognisable actions:
>>>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF,
>> PHY_PORT,
>>>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
>>>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>>>>
>>>> Recognisable RSS types (action RSS):
>>>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
>>>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
>>>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
>>>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY,
>> L4_DST_ONLY.
>>>>
>>>> Unrecognised parts of the flow specification are represented by tokens
>>>> "{unknown}" and "{unknown bits}". Interested parties are welcome to
>>>> extend this tool to recognise more items and actions.
>>>>
>>>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-30 7:27 ` Ori Kam
2021-05-31 2:28 ` Stephen Hemminger
2021-06-01 14:08 ` Ivan Malov
@ 2021-06-02 20:48 ` Stephen Hemminger
2 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2021-06-02 20:48 UTC (permalink / raw)
To: Ori Kam
Cc: Ivan Malov, dev, NBU-Contact-Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, Ray Kinsella, Neil Horman
On Sun, 30 May 2021 07:27:32 +0000
Ori Kam <orika@nvidia.com> wrote:
> > + retv = snprintf(buf + *nb_chars_total, write_size_max,
> > + " %02x:%02x:%02x:%02x:%02x:%02x",
> > + ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
please use existing rte_ether_format_addr instead of another copy of it.
> +static int
> +rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const void *value_ptr)
> +{
> + SNPRINT(uint32_t, " 0x%08x");
> +}
Why not use "%#"PRIx32 to be safe against sizes?
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-02 13:49 ` Andrew Rybchenko
@ 2021-06-03 8:25 ` Ori Kam
2021-06-03 8:43 ` Andrew Rybchenko
0 siblings, 1 reply; 16+ messages in thread
From: Ori Kam @ 2021-06-03 8:25 UTC (permalink / raw)
To: Andrew Rybchenko, Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Ray Kinsella, Neil Horman
Hi Andrew,
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> Hi Ori,
>
> On 6/2/21 4:32 PM, Ori Kam wrote:
> > Hi Ivan,
> >
> >> -----Original Message-----
> >> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
> >>
> >> Hi Ori,
> >>
> >> Your review efforts are much appreciated. I understand your concern
> >> about the partial item/action coverage, but there are some points to
> >> be considered when addressing it:
> >> - It's anyway hardly possible to use the printed flow directly in
> >> testpmd if it contains "opaque", or "PMD-specific", items/actions in
> >> terms of the tunnel offload model. These items/actions have to be
> >> omitted when printing the flow, and their absence in the resulting
> >> string means that copy/pasting the flow to testpmd isn't helpful in
> >> this particular case.
> > I fully agree with you that some of the rules can't be printed. That is why.
> > I'm not sure having partial solution is the way to go.
>
> Sorry, I disagree that possibility to cover 99% and impossibility to cover just
> 1% is the reason to discard.
>
I agree with you that 99% is better than 0 😊 but is this patch 99%?
maybe we can agree even if it is 70% it is good for this patch.
> > If OVS for example cares about
> > some of the item/action, maybe this log should be on their part.
>
> OvS does and as far as I can see already has bugs there.
> Of course, nobody says that it is testpmd-complient format, but it definitely
> looks so.
>
> Anyway, it sounds strange do duplicate the functionality in many-many DPDK
> apps. Of course, it removes the headache from DPDK maintainers, but it is
> hardly friendly to DPDK community in general.
>
Fully agree with you that if some feature is used by number of applications, then it is better
or at least nicer to have it in DPDK, but my question is that, the current patch is for the OVS use case
from my understanding and not considering any other application. So, in this case
do we want it in DPDK?
> >> - There's action ENCAP which also can't be fully represented by the
> >> tool in question, simply because it has no parameters. In tespmd, one
> >> first has to issue "set vxlan" command to configure the encap.
> >> header, whilst "vxlan" token in the flow rule string just refers to
> >> the previously set encap. parameters. The suggested flow print helper
> >> can't reliably print these two components ("set vxlan" and the flow
> >> rule itself) as they belong to different testpmd command strings.
> >>
> > Again, I agree with you but like my above answer, do we want a partial
> > solution in DPDK?
>
> My answer is YES.
>
I can live with such decision but like I said above it depends on the use case
and how partial it is.
> >> As you might see, completeness of the solution wouldn't necessarily
> >> be reachable, even if full item/action coverage was provided.
> >>
> >> As for the item/action coverage itself, it's rather controversial. On
> >> the one hand, yes, we should probably try to cover more items and
> >> actions in the suggested patch, to the extent allowed by our current
> >> priorities. But on the other hand, the existing coverage might not be
> >> that poor: it's fairly elaborate and at least allows to print the
> >> most common flow rules.
> >>
> > That is my main issue you are going to push something that is good for
> > you and maybe some other cases, but it can't be used by all
> > application, even with the most basic commands like encap.
>
> Isn't it fair: if one wants to use something, be ready to help and extend it. No
> pain, no gain :) Jokes aside - we're ready to support "the most basic
> commands", just list it, but do not say everything is basic. "everything" will
> delay the feature and simply unfair to demand (IMHO).
>
> IMHO, the feature would make flow API more friendly and easier to debug,
> report bugs etc.
>
I fully agree that if someone wants functionality, he should work for it.
But as a developer of rte_flow and maintainer I need to ask who will add the new
features/missing features? Should we enforce that each developer when coding a new
item/action will add it to the print?
Or just users that care about such log will add it?
To summarize.
I think the following question must be answer before deciding:
1. how many apps are going to use this feature?
2. it the coverage sufficient?
3. who is responsible to update it? (each developer/the interested party member?)
Best,
Ori
> >> Yes, macros and some other cunning ways to cover more flow specifics
> >> might come in handy, but, at the same time, can be rather error prone.
> >> Sometimes it's more robust to just write the code out in full.
> >>
> > I'm always in favor of easy of extra complex but too hard is also not good.
> >
> > Thanks,
> > Ori
> >> Thank you.
> >>
> >> On 30/05/2021 10:27, Ori Kam wrote:
> >>> Hi Ivan,
> >>>
> >>> First nice idea and thanks for the picking up the ball.
> >>>
> >>> Before a detail review,
> >>> The main thing I'm concerned about is that this print will be
> >>> partially
> >> supported,
> >>> I know that you covered this issue by printing unknown for
> >>> unsupported
> >> item/actions,
> >>> but this will mean that it is enough that one item/action is not
> >>> supported
> >> and already the
> >>> flow can't be used in testpmd.
> >>> To get full support it means that the developer needs to add such
> >>> print
> >> with each new
> >>> item/action. I agree it is possible, but it has high overhead for each
> feature.
> >>>
> >>> Maybe we should somehow create a macros for the prints or other
> >>> easier
> >> to support ways.
> >>>
> >>> For example, just printing the ipv4 has 7 function calls inside of
> >>> it each one
> >> with error checking,
> >>> and I'm not counting the dedicated functions.
> >>>
> >>>
> >>>
> >>> Best,
> >>> Ori
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Ivan Malov <ivan.malov@oktetlabs.ru>
> >>>> Sent: Thursday, May 27, 2021 11:25 AM
> >>>> To: dev@dpdk.org
> >>>> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
> >> Yigit
> >>>> <ferruh.yigit@intel.com>; Andrew Rybchenko
> >>>> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
> >>>> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> >>>> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow
> >> rule
> >>>> dumping
> >>>>
> >>>> DPDK applications (for example, OvS) or tests which use RTE flow
> >>>> API
> >> need to
> >>>> log created or rejected flow rules to help to recognise what goes
> >>>> right or wrong. From this standpoint, testpmd-compliant format is
> >>>> nice for the purpose because it allows to copy-paste the flow rules
> >>>> and debug using testpmd.
> >>>>
> >>>> Recognisable pattern items:
> >>>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
> >> VXLAN,
> >>>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >>>>
> >>>> Recognisable actions:
> >>>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF,
> >> PHY_PORT,
> >>>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> >>>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >>>>
> >>>> Recognisable RSS types (action RSS):
> >>>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> >>>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> >>>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX,
> IPV6_TCP_EX,
> >>>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY,
> >> L4_DST_ONLY.
> >>>>
> >>>> Unrecognised parts of the flow specification are represented by
> >>>> tokens "{unknown}" and "{unknown bits}". Interested parties are
> >>>> welcome to extend this tool to recognise more items and actions.
> >>>>
> >>>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-03 8:25 ` Ori Kam
@ 2021-06-03 8:43 ` Andrew Rybchenko
2021-06-03 9:27 ` Ori Kam
0 siblings, 1 reply; 16+ messages in thread
From: Andrew Rybchenko @ 2021-06-03 8:43 UTC (permalink / raw)
To: Ori Kam, Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Ray Kinsella,
Neil Horman, Eli Britstein, Ilya Maximets
Hi Ori,
Cc Eli and Ilya since I think OvS could be interested in the
feature.
On 6/3/21 11:25 AM, Ori Kam wrote:
> Hi Andrew,
>
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>
>> Hi Ori,
>>
>> On 6/2/21 4:32 PM, Ori Kam wrote:
>>> Hi Ivan,
>>>
>>>> -----Original Message-----
>>>> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
>>>>
>>>> Hi Ori,
>>>>
>>>> Your review efforts are much appreciated. I understand your concern
>>>> about the partial item/action coverage, but there are some points to
>>>> be considered when addressing it:
>>>> - It's anyway hardly possible to use the printed flow directly in
>>>> testpmd if it contains "opaque", or "PMD-specific", items/actions in
>>>> terms of the tunnel offload model. These items/actions have to be
>>>> omitted when printing the flow, and their absence in the resulting
>>>> string means that copy/pasting the flow to testpmd isn't helpful in
>>>> this particular case.
>>> I fully agree with you that some of the rules can't be printed. That is why.
>>> I'm not sure having partial solution is the way to go.
>>
>> Sorry, I disagree that possibility to cover 99% and impossibility to cover just
>> 1% is the reason to discard.
>>
>
> I agree with you that 99% is better than 0 😊 but is this patch 99%?
> maybe we can agree even if it is 70% it is good for this patch.
Hold on. Here we're talking about a theoretical possibility to
cover 99%. Coverage in this patch is discussed below in terms
of "the most basic commands".
.
>>> If OVS for example cares about
>>> some of the item/action, maybe this log should be on their part.
>>
>> OvS does and as far as I can see already has bugs there.
>> Of course, nobody says that it is testpmd-complient format, but it definitely
>> looks so.
>>
>> Anyway, it sounds strange do duplicate the functionality in many-many DPDK
>> apps. Of course, it removes the headache from DPDK maintainers, but it is
>> hardly friendly to DPDK community in general.
>>
>
> Fully agree with you that if some feature is used by number of applications, then it is better
> or at least nicer to have it in DPDK, but my question is that, the current patch is for the OVS use case
> from my understanding and not considering any other application. So, in this case
> do we want it in DPDK?
The primary goal, in fact, is our testing harness :)
OvS is just an open source example. We could easily add
it to our code but decided that it would be useful in
DPDK since seen such messages in OvS logs.
>>>> - There's action ENCAP which also can't be fully represented by the
>>>> tool in question, simply because it has no parameters. In tespmd, one
>>>> first has to issue "set vxlan" command to configure the encap.
>>>> header, whilst "vxlan" token in the flow rule string just refers to
>>>> the previously set encap. parameters. The suggested flow print helper
>>>> can't reliably print these two components ("set vxlan" and the flow
>>>> rule itself) as they belong to different testpmd command strings.
>>>>
>>> Again, I agree with you but like my above answer, do we want a partial
>>> solution in DPDK?
>>
>> My answer is YES.
>>
>
> I can live with such decision but like I said above it depends on the use case
> and how partial it is.
See above.
>>>> As you might see, completeness of the solution wouldn't necessarily
>>>> be reachable, even if full item/action coverage was provided.
>>>>
>>>> As for the item/action coverage itself, it's rather controversial. On
>>>> the one hand, yes, we should probably try to cover more items and
>>>> actions in the suggested patch, to the extent allowed by our current
>>>> priorities. But on the other hand, the existing coverage might not be
>>>> that poor: it's fairly elaborate and at least allows to print the
>>>> most common flow rules.
>>>>
>>> That is my main issue you are going to push something that is good for
>>> you and maybe some other cases, but it can't be used by all
>>> application, even with the most basic commands like encap.
>>
>> Isn't it fair: if one wants to use something, be ready to help and extend it. No
>> pain, no gain :) Jokes aside - we're ready to support "the most basic
>> commands", just list it, but do not say everything is basic. "everything" will
>> delay the feature and simply unfair to demand (IMHO).
>>
>> IMHO, the feature would make flow API more friendly and easier to debug,
>> report bugs etc.
>>
> I fully agree that if someone wants functionality, he should work for it.
> But as a developer of rte_flow and maintainer I need to ask who will add the new
> features/missing features? Should we enforce that each developer when coding a new
> item/action will add it to the print?
> Or just users that care about such log will add it?
It is a good and valid questions. First of all we can help
with (or just completely take it) to maintain the file.
Second basically any option from above is OK for me.
My personal preference would be to require implementation
for new RTE flow features. In fact, testpmd may start to
use it to list created rules etc.
We'll try to make it easier to add new items and actions
support.
> To summarize.
> I think the following question must be answer before deciding:
> 1. how many apps are going to use this feature?
I'll keep OvS maintainers to answer if OvS would like to use
it. We'll definitely use it (either from DPDK or from internal
code base if it is not accepted), but I guess not open
source projects may be not taken into account.
> 2. it the coverage sufficient?
I hope yes, since it tries to warn about not supported
features. I.e. it does not lie simply skipping not supported
bits.
> 3. who is responsible to update it? (each developer/the interested party member?)
See above.
I hope to see more answers here.
We'll update it when we need more items/actions to dump.
Thanks,
Andrew.
> Best,
> Ori
>
>>>> Yes, macros and some other cunning ways to cover more flow specifics
>>>> might come in handy, but, at the same time, can be rather error prone.
>>>> Sometimes it's more robust to just write the code out in full.
>>>>
>>> I'm always in favor of easy of extra complex but too hard is also not good.
>>>
>>> Thanks,
>>> Ori
>>>> Thank you.
>>>>
>>>> On 30/05/2021 10:27, Ori Kam wrote:
>>>>> Hi Ivan,
>>>>>
>>>>> First nice idea and thanks for the picking up the ball.
>>>>>
>>>>> Before a detail review,
>>>>> The main thing I'm concerned about is that this print will be
>>>>> partially
>>>> supported,
>>>>> I know that you covered this issue by printing unknown for
>>>>> unsupported
>>>> item/actions,
>>>>> but this will mean that it is enough that one item/action is not
>>>>> supported
>>>> and already the
>>>>> flow can't be used in testpmd.
>>>>> To get full support it means that the developer needs to add such
>>>>> print
>>>> with each new
>>>>> item/action. I agree it is possible, but it has high overhead for each
>> feature.
>>>>>
>>>>> Maybe we should somehow create a macros for the prints or other
>>>>> easier
>>>> to support ways.
>>>>>
>>>>> For example, just printing the ipv4 has 7 function calls inside of
>>>>> it each one
>>>> with error checking,
>>>>> and I'm not counting the dedicated functions.
>>>>>
>>>>>
>>>>>
>>>>> Best,
>>>>> Ori
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Ivan Malov <ivan.malov@oktetlabs.ru>
>>>>>> Sent: Thursday, May 27, 2021 11:25 AM
>>>>>> To: dev@dpdk.org
>>>>>> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
>>>> Yigit
>>>>>> <ferruh.yigit@intel.com>; Andrew Rybchenko
>>>>>> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
>>>>>> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
>>>>>> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow
>>>> rule
>>>>>> dumping
>>>>>>
>>>>>> DPDK applications (for example, OvS) or tests which use RTE flow
>>>>>> API
>>>> need to
>>>>>> log created or rejected flow rules to help to recognise what goes
>>>>>> right or wrong. From this standpoint, testpmd-compliant format is
>>>>>> nice for the purpose because it allows to copy-paste the flow rules
>>>>>> and debug using testpmd.
>>>>>>
>>>>>> Recognisable pattern items:
>>>>>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
>>>> VXLAN,
>>>>>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>>>>>>
>>>>>> Recognisable actions:
>>>>>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF,
>>>> PHY_PORT,
>>>>>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
>>>>>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>>>>>>
>>>>>> Recognisable RSS types (action RSS):
>>>>>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
>>>>>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
>>>>>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX,
>> IPV6_TCP_EX,
>>>>>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY,
>>>> L4_DST_ONLY.
>>>>>>
>>>>>> Unrecognised parts of the flow specification are represented by
>>>>>> tokens "{unknown}" and "{unknown bits}". Interested parties are
>>>>>> welcome to extend this tool to recognise more items and actions.
>>>>>>
>>>>>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-03 8:43 ` Andrew Rybchenko
@ 2021-06-03 9:27 ` Ori Kam
2023-03-23 12:54 ` Ferruh Yigit
0 siblings, 1 reply; 16+ messages in thread
From: Ori Kam @ 2021-06-03 9:27 UTC (permalink / raw)
To: Andrew Rybchenko, Ivan Malov, dev
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Ray Kinsella,
Neil Horman, Eli Britstein, Ilya Maximets
Hi Andrew,
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> Hi Ori,
>
> Cc Eli and Ilya since I think OvS could be interested in the feature.
>
> On 6/3/21 11:25 AM, Ori Kam wrote:
> > Hi Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>
> >> Hi Ori,
> >>
> >> On 6/2/21 4:32 PM, Ori Kam wrote:
> >>> Hi Ivan,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
> >>>>
> >>>> Hi Ori,
> >>>>
> >>>> Your review efforts are much appreciated. I understand your concern
> >>>> about the partial item/action coverage, but there are some points
> >>>> to be considered when addressing it:
> >>>> - It's anyway hardly possible to use the printed flow directly in
> >>>> testpmd if it contains "opaque", or "PMD-specific", items/actions
> >>>> in terms of the tunnel offload model. These items/actions have to
> >>>> be omitted when printing the flow, and their absence in the
> >>>> resulting string means that copy/pasting the flow to testpmd isn't
> >>>> helpful in this particular case.
> >>> I fully agree with you that some of the rules can't be printed. That is why.
> >>> I'm not sure having partial solution is the way to go.
> >>
> >> Sorry, I disagree that possibility to cover 99% and impossibility to
> >> cover just 1% is the reason to discard.
> >>
> >
> > I agree with you that 99% is better than 0 😊 but is this patch 99%?
> > maybe we can agree even if it is 70% it is good for this patch.
>
> Hold on. Here we're talking about a theoretical possibility to cover 99%.
> Coverage in this patch is discussed below in terms of "the most basic
> commands".
I know that that is why I said 70%.
> .
>
> >>> If OVS for example cares about
> >>> some of the item/action, maybe this log should be on their part.
> >>
> >> OvS does and as far as I can see already has bugs there.
> >> Of course, nobody says that it is testpmd-complient format, but it
> >> definitely looks so.
> >>
> >> Anyway, it sounds strange do duplicate the functionality in many-many
> >> DPDK apps. Of course, it removes the headache from DPDK maintainers,
> >> but it is hardly friendly to DPDK community in general.
> >>
> >
> > Fully agree with you that if some feature is used by number of
> > applications, then it is better or at least nicer to have it in DPDK,
> > but my question is that, the current patch is for the OVS use case
> > from my understanding and not considering any other application. So, in this
> case do we want it in DPDK?
>
> The primary goal, in fact, is our testing harness :) OvS is just an open source
> example. We could easily add it to our code but decided that it would be useful
> in DPDK since seen such messages in OvS logs.
>
Private application are also good examples from my point of view.
> >>>> - There's action ENCAP which also can't be fully represented by the
> >>>> tool in question, simply because it has no parameters. In tespmd,
> >>>> one first has to issue "set vxlan" command to configure the encap.
> >>>> header, whilst "vxlan" token in the flow rule string just refers to
> >>>> the previously set encap. parameters. The suggested flow print
> >>>> helper can't reliably print these two components ("set vxlan" and
> >>>> the flow rule itself) as they belong to different testpmd command strings.
> >>>>
> >>> Again, I agree with you but like my above answer, do we want a
> >>> partial solution in DPDK?
> >>
> >> My answer is YES.
> >>
> >
> > I can live with such decision but like I said above it depends on the
> > use case and how partial it is.
>
> See above.
>
> >>>> As you might see, completeness of the solution wouldn't necessarily
> >>>> be reachable, even if full item/action coverage was provided.
> >>>>
> >>>> As for the item/action coverage itself, it's rather controversial.
> >>>> On the one hand, yes, we should probably try to cover more items
> >>>> and actions in the suggested patch, to the extent allowed by our
> >>>> current priorities. But on the other hand, the existing coverage
> >>>> might not be that poor: it's fairly elaborate and at least allows
> >>>> to print the most common flow rules.
> >>>>
> >>> That is my main issue you are going to push something that is good
> >>> for you and maybe some other cases, but it can't be used by all
> >>> application, even with the most basic commands like encap.
> >>
> >> Isn't it fair: if one wants to use something, be ready to help and
> >> extend it. No pain, no gain :) Jokes aside - we're ready to support
> >> "the most basic commands", just list it, but do not say everything is
> >> basic. "everything" will delay the feature and simply unfair to demand
> (IMHO).
> >>
> >> IMHO, the feature would make flow API more friendly and easier to
> >> debug, report bugs etc.
> >>
> > I fully agree that if someone wants functionality, he should work for it.
> > But as a developer of rte_flow and maintainer I need to ask who will
> > add the new features/missing features? Should we enforce that each
> > developer when coding a new item/action will add it to the print?
> > Or just users that care about such log will add it?
>
> It is a good and valid questions. First of all we can help with (or just completely
> take it) to maintain the file.
The issue is not the maintaining of the file is the extra work required for each new feature.
and what do we do with features that are hard to print for example encap data?
> Second basically any option from above is OK for me.
> My personal preference would be to require implementation for new RTE flow
> features. In fact, testpmd may start to use it to list created rules etc.
> We'll try to make it easier to add new items and actions support.
>
I also think that the best is that all new features will be added to this print
but the requirement is that adding new this code will not have high overhead.
If we can find and easy way to do it, I think this will be perfect.
> > To summarize.
> > I think the following question must be answer before deciding:
> > 1. how many apps are going to use this feature?
>
> I'll keep OvS maintainers to answer if OvS would like to use it. We'll definitely
> use it (either from DPDK or from internal code base if it is not accepted), but I
> guess not open source projects may be not taken into account.
>
Agree lets see the application programmers view point on this.
> > 2. it the coverage sufficient?
>
> I hope yes, since it tries to warn about not supported features. I.e. it does not lie
> simply skipping not supported bits.
>
But then you can't copy paste it to testpmd and test, which I think is a very good
why to debug issues that costumers may find.
> > 3. who is responsible to update it? (each developer/the interested
> > party member?)
>
> See above.
>
> I hope to see more answers here.
>
+1
Best,
Ori
> We'll update it when we need more items/actions to dump.
>
> Thanks,
> Andrew.
>
> > Best,
> > Ori
> >
> >>>> Yes, macros and some other cunning ways to cover more flow
> >>>> specifics might come in handy, but, at the same time, can be rather error
> prone.
> >>>> Sometimes it's more robust to just write the code out in full.
> >>>>
> >>> I'm always in favor of easy of extra complex but too hard is also not good.
> >>>
> >>> Thanks,
> >>> Ori
> >>>> Thank you.
> >>>>
> >>>> On 30/05/2021 10:27, Ori Kam wrote:
> >>>>> Hi Ivan,
> >>>>>
> >>>>> First nice idea and thanks for the picking up the ball.
> >>>>>
> >>>>> Before a detail review,
> >>>>> The main thing I'm concerned about is that this print will be
> >>>>> partially
> >>>> supported,
> >>>>> I know that you covered this issue by printing unknown for
> >>>>> unsupported
> >>>> item/actions,
> >>>>> but this will mean that it is enough that one item/action is not
> >>>>> supported
> >>>> and already the
> >>>>> flow can't be used in testpmd.
> >>>>> To get full support it means that the developer needs to add such
> >>>>> print
> >>>> with each new
> >>>>> item/action. I agree it is possible, but it has high overhead for
> >>>>> each
> >> feature.
> >>>>>
> >>>>> Maybe we should somehow create a macros for the prints or other
> >>>>> easier
> >>>> to support ways.
> >>>>>
> >>>>> For example, just printing the ipv4 has 7 function calls inside of
> >>>>> it each one
> >>>> with error checking,
> >>>>> and I'm not counting the dedicated functions.
> >>>>>
> >>>>>
> >>>>>
> >>>>> Best,
> >>>>> Ori
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Ivan Malov <ivan.malov@oktetlabs.ru>
> >>>>>> Sent: Thursday, May 27, 2021 11:25 AM
> >>>>>> To: dev@dpdk.org
> >>>>>> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh
> >>>> Yigit
> >>>>>> <ferruh.yigit@intel.com>; Andrew Rybchenko
> >>>>>> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
> >>>>>> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> >>>>>> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant
> >>>>>> flow
> >>>> rule
> >>>>>> dumping
> >>>>>>
> >>>>>> DPDK applications (for example, OvS) or tests which use RTE flow
> >>>>>> API
> >>>> need to
> >>>>>> log created or rejected flow rules to help to recognise what goes
> >>>>>> right or wrong. From this standpoint, testpmd-compliant format is
> >>>>>> nice for the purpose because it allows to copy-paste the flow
> >>>>>> rules and debug using testpmd.
> >>>>>>
> >>>>>> Recognisable pattern items:
> >>>>>> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
> >>>> VXLAN,
> >>>>>> NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >>>>>>
> >>>>>> Recognisable actions:
> >>>>>> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF,
> >>>> PHY_PORT,
> >>>>>> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> >>>>>> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >>>>>>
> >>>>>> Recognisable RSS types (action RSS):
> >>>>>> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> >>>>>> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> >>>>>> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX,
> >> IPV6_TCP_EX,
> >>>>>> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY,
> >>>> L4_DST_ONLY.
> >>>>>>
> >>>>>> Unrecognised parts of the flow specification are represented by
> >>>>>> tokens "{unknown}" and "{unknown bits}". Interested parties are
> >>>>>> welcome to extend this tool to recognise more items and actions.
> >>>>>>
> >>>>>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-05-27 8:25 [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping Ivan Malov
2021-05-30 7:27 ` Ori Kam
@ 2021-06-14 12:42 ` Singh, Aman Deep
2021-06-17 8:11 ` Singh, Aman Deep
2021-07-02 10:20 ` Andrew Rybchenko
1 sibling, 2 replies; 16+ messages in thread
From: Singh, Aman Deep @ 2021-06-14 12:42 UTC (permalink / raw)
To: Ivan Malov, dev
Cc: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ori Kam,
Ray Kinsella, Neil Horman
Hi Ivan,
As a suggestion, can we add a check for debug log_level in
"rte_flow_snprint" itself.
So we can avoid CPU time, in cases where we don't want these logs.
On 5/27/2021 1:55 PM, Ivan Malov wrote:
> DPDK applications (for example, OvS) or tests which use RTE
> flow API need to log created or rejected flow rules to help
> to recognise what goes right or wrong. From this standpoint,
> testpmd-compliant format is nice for the purpose because it
> allows to copy-paste the flow rules and debug using testpmd.
>
> Recognisable pattern items:
> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP,
> TCP, VXLAN, NVGRE, GENEVE, MARK, PPPOES, PPPOED.
>
> Recognisable actions:
> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
>
> Recognisable RSS types (action RSS):
> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP, NONFRAG_IPV4_OTHER,
> IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP, NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER,
> IPV6_EX, IPV6_TCP_EX, IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY,
> L4_SRC_ONLY, L4_DST_ONLY.
>
> Unrecognised parts of the flow specification are represented by
> tokens "{unknown}" and "{unknown bits}". Interested parties are
> welcome to extend this tool to recognise more items and actions.
>
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
>
> +int
> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[])
> +{
> + int rc;
> +
Like add a check with "rte_log_can_log()" and return from here.
> + if (buf == NULL && buf_size != 0)
> + return -EINVAL;
> +
> + *nb_chars_total = 0;
> +
> + rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total, pattern);
> + if (rc != 0)
> + return rc;
> +
> + rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total, actions);
> + if (rc != 0)
> + return rc;
> +
> + return 0;
> +}
Regards
Aman
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-14 12:42 ` Singh, Aman Deep
@ 2021-06-17 8:11 ` Singh, Aman Deep
2021-07-02 10:20 ` Andrew Rybchenko
1 sibling, 0 replies; 16+ messages in thread
From: Singh, Aman Deep @ 2021-06-17 8:11 UTC (permalink / raw)
To: Ivan Malov, dev
Hi Ivan,
Another option is to log these structures values (rte_flow_attr ,
rte_flow_item, rte_flow_action) as it is.
Latter run a parsing script on these logs, to convets them into
meaningful flow items.
The completeness of the solution will depend on how well script is
maintained.
Regards
Aman
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-14 12:42 ` Singh, Aman Deep
2021-06-17 8:11 ` Singh, Aman Deep
@ 2021-07-02 10:20 ` Andrew Rybchenko
1 sibling, 0 replies; 16+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 10:20 UTC (permalink / raw)
To: Singh, Aman Deep, Ivan Malov, dev
Cc: Thomas Monjalon, Ferruh Yigit, Ori Kam, Ray Kinsella, Neil Horman
Hi Aman,
On 6/14/21 3:42 PM, Singh, Aman Deep wrote:
> Hi Ivan,
>
> As a suggestion, can we add a check for debug log_level in
> "rte_flow_snprint" itself.
I see the reason, but I think it a bad idea to put the check
inside the function. If calling function cares, it should do
the check.
> So we can avoid CPU time, in cases where we don't want these logs.
[snip]
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
2021-06-03 9:27 ` Ori Kam
@ 2023-03-23 12:54 ` Ferruh Yigit
0 siblings, 0 replies; 16+ messages in thread
From: Ferruh Yigit @ 2023-03-23 12:54 UTC (permalink / raw)
To: Ori Kam, Andrew Rybchenko, Ivan Malov, dev, Aman Singh
Cc: NBU-Contact-Thomas Monjalon, Ferruh Yigit, Ray Kinsella,
Neil Horman, Eli Britstein, Ilya Maximets
On 6/3/2021 10:27 AM, Ori Kam wrote:
> Hi Andrew,
>
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>
>> Hi Ori,
>>
>> Cc Eli and Ilya since I think OvS could be interested in the feature.
>>
>> On 6/3/21 11:25 AM, Ori Kam wrote:
>>> Hi Andrew,
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>
>>>> Hi Ori,
>>>>
>>>> On 6/2/21 4:32 PM, Ori Kam wrote:
>>>>> Hi Ivan,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
>>>>>>
>>>>>> Hi Ori,
>>>>>>
>>>>>> Your review efforts are much appreciated. I understand your concern
>>>>>> about the partial item/action coverage, but there are some points
>>>>>> to be considered when addressing it:
>>>>>> - It's anyway hardly possible to use the printed flow directly in
>>>>>> testpmd if it contains "opaque", or "PMD-specific", items/actions
>>>>>> in terms of the tunnel offload model. These items/actions have to
>>>>>> be omitted when printing the flow, and their absence in the
>>>>>> resulting string means that copy/pasting the flow to testpmd isn't
>>>>>> helpful in this particular case.
>>>>> I fully agree with you that some of the rules can't be printed. That is why.
>>>>> I'm not sure having partial solution is the way to go.
>>>>
>>>> Sorry, I disagree that possibility to cover 99% and impossibility to
>>>> cover just 1% is the reason to discard.
>>>>
>>>
>>> I agree with you that 99% is better than 0 😊 but is this patch 99%?
>>> maybe we can agree even if it is 70% it is good for this patch.
>>
>> Hold on. Here we're talking about a theoretical possibility to cover 99%.
>> Coverage in this patch is discussed below in terms of "the most basic
>> commands".
>
> I know that that is why I said 70%.
>
>> .
>>
>>>>> If OVS for example cares about
>>>>> some of the item/action, maybe this log should be on their part.
>>>>
>>>> OvS does and as far as I can see already has bugs there.
>>>> Of course, nobody says that it is testpmd-complient format, but it
>>>> definitely looks so.
>>>>
>>>> Anyway, it sounds strange do duplicate the functionality in many-many
>>>> DPDK apps. Of course, it removes the headache from DPDK maintainers,
>>>> but it is hardly friendly to DPDK community in general.
>>>>
>>>
>>> Fully agree with you that if some feature is used by number of
>>> applications, then it is better or at least nicer to have it in DPDK,
>>> but my question is that, the current patch is for the OVS use case
>>> from my understanding and not considering any other application. So, in this
>> case do we want it in DPDK?
>>
>> The primary goal, in fact, is our testing harness :) OvS is just an open source
>> example. We could easily add it to our code but decided that it would be useful
>> in DPDK since seen such messages in OvS logs.
>>
> Private application are also good examples from my point of view.
>
>>>>>> - There's action ENCAP which also can't be fully represented by the
>>>>>> tool in question, simply because it has no parameters. In tespmd,
>>>>>> one first has to issue "set vxlan" command to configure the encap.
>>>>>> header, whilst "vxlan" token in the flow rule string just refers to
>>>>>> the previously set encap. parameters. The suggested flow print
>>>>>> helper can't reliably print these two components ("set vxlan" and
>>>>>> the flow rule itself) as they belong to different testpmd command strings.
>>>>>>
>>>>> Again, I agree with you but like my above answer, do we want a
>>>>> partial solution in DPDK?
>>>>
>>>> My answer is YES.
>>>>
>>>
>>> I can live with such decision but like I said above it depends on the
>>> use case and how partial it is.
>>
>> See above.
>>
>>>>>> As you might see, completeness of the solution wouldn't necessarily
>>>>>> be reachable, even if full item/action coverage was provided.
>>>>>>
>>>>>> As for the item/action coverage itself, it's rather controversial.
>>>>>> On the one hand, yes, we should probably try to cover more items
>>>>>> and actions in the suggested patch, to the extent allowed by our
>>>>>> current priorities. But on the other hand, the existing coverage
>>>>>> might not be that poor: it's fairly elaborate and at least allows
>>>>>> to print the most common flow rules.
>>>>>>
>>>>> That is my main issue you are going to push something that is good
>>>>> for you and maybe some other cases, but it can't be used by all
>>>>> application, even with the most basic commands like encap.
>>>>
>>>> Isn't it fair: if one wants to use something, be ready to help and
>>>> extend it. No pain, no gain :) Jokes aside - we're ready to support
>>>> "the most basic commands", just list it, but do not say everything is
>>>> basic. "everything" will delay the feature and simply unfair to demand
>> (IMHO).
>>>>
>>>> IMHO, the feature would make flow API more friendly and easier to
>>>> debug, report bugs etc.
>>>>
>>> I fully agree that if someone wants functionality, he should work for it.
>>> But as a developer of rte_flow and maintainer I need to ask who will
>>> add the new features/missing features? Should we enforce that each
>>> developer when coding a new item/action will add it to the print?
>>> Or just users that care about such log will add it?
>>
>> It is a good and valid questions. First of all we can help with (or just completely
>> take it) to maintain the file.
>
> The issue is not the maintaining of the file is the extra work required for each new feature.
> and what do we do with features that are hard to print for example encap data?
>
>> Second basically any option from above is OK for me.
>> My personal preference would be to require implementation for new RTE flow
>> features. In fact, testpmd may start to use it to list created rules etc.
>> We'll try to make it easier to add new items and actions support.
>>
> I also think that the best is that all new features will be added to this print
> but the requirement is that adding new this code will not have high overhead.
> If we can find and easy way to do it, I think this will be perfect.
>
>>> To summarize.
>>> I think the following question must be answer before deciding:
>>> 1. how many apps are going to use this feature?
>>
>> I'll keep OvS maintainers to answer if OvS would like to use it. We'll definitely
>> use it (either from DPDK or from internal code base if it is not accepted), but I
>> guess not open source projects may be not taken into account.
>>
> Agree lets see the application programmers view point on this.
>
>>> 2. it the coverage sufficient?
>>
>> I hope yes, since it tries to warn about not supported features. I.e. it does not lie
>> simply skipping not supported bits.
>>
>
> But then you can't copy paste it to testpmd and test, which I think is a very good
> why to debug issues that costumers may find.
>
>>> 3. who is responsible to update it? (each developer/the interested
>>> party member?)
>>
>> See above.
>>
>> I hope to see more answers here.
>>
> +1
>
> Best,
> Ori
>
This is an old patch still active in patchwork [1], lets try to conclude it.
Patch is to help debugging flow API, by printing flow API rule in
testpmd syntax so that issue can be reproduced/tested using testpmd.
This support comes with some amount of code and I agree with Ori that it
is a question who will add relevant code for future flow rules, and if
it worth the overhead it brings.
Also I have additional concern that how correct to associate this kind
of support to testpmd syntax which can change or go away easily, and add
code to ethdev library that has (logical) dependency to test application.
Considering patch is sitting without update for a while and concerns
above, I am rejecting this patch.
But Aman has a good suggestion in other thread, why not record flow
rule, and later parse it with a separate tool. This tool can generate
output in testpmd syntax as well as any other format desired.
With this ethdev code becomes very small and fixed size, and parsing
code (tool) decoupled from ethdev library.
If this is interesting to you, please start another thread to discuss or
send another RFC for it.
Thanks,
ferruh
[1]
https://patches.dpdk.org/project/dpdk/patch/20210527082504.3495-1-ivan.malov@oktetlabs.ru/
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2023-03-23 12:55 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-27 8:25 [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping Ivan Malov
2021-05-30 7:27 ` Ori Kam
2021-05-31 2:28 ` Stephen Hemminger
2021-06-01 14:17 ` Ivan Malov
2021-06-01 15:10 ` Stephen Hemminger
2021-06-01 14:08 ` Ivan Malov
2021-06-02 13:32 ` Ori Kam
2021-06-02 13:49 ` Andrew Rybchenko
2021-06-03 8:25 ` Ori Kam
2021-06-03 8:43 ` Andrew Rybchenko
2021-06-03 9:27 ` Ori Kam
2023-03-23 12:54 ` Ferruh Yigit
2021-06-02 20:48 ` Stephen Hemminger
2021-06-14 12:42 ` Singh, Aman Deep
2021-06-17 8:11 ` Singh, Aman Deep
2021-07-02 10:20 ` Andrew Rybchenko
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).