DPDK patches and discussions
From: Ori Kam <orika@nvidia.com>
To: Ivan Malov <Ivan.Malov@oktetlabs.ru>, "dev@dpdk.org" <dev@dpdk.org>
Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	Ray Kinsella <mdr@ashroe.eu>, Neil Horman <nhorman@tuxdriver.com>
Subject: Re: [dpdk-dev] [RFC PATCH] ethdev: add support for testpmd-compliant flow rule dumping
Date: Wed, 2 Jun 2021 13:32:15 +0000	[thread overview]
Message-ID: <BL1PR12MB5335FE20DF4E61FE5AF9E696D63D9@BL1PR12MB5335.namprd12.prod.outlook.com> (raw)
In-Reply-To: <6175cb60-5d9a-832a-fa07-32326018661c@oktetlabs.ru>

Hi Ivan,

> -----Original Message-----
> From: Ivan Malov <Ivan.Malov@oktetlabs.ru>
> 
> Hi Ori,
> 
> Your review efforts are much appreciated. I understand your concern
> about the partial item/action coverage, but there are some points to be
> considered when addressing it:
> - In any case, it's hardly possible to use the printed flow directly in
> testpmd if it contains "opaque", or "PMD-specific", items/actions in
> terms of the tunnel offload model. These items/actions have to be
> omitted when printing the flow, and their absence from the resulting
> string means that copy/pasting the flow to testpmd isn't helpful in this
> particular case.
I fully agree with you that some of the rules can't be printed. That is why
I'm not sure a partial solution is the way to go. If OVS, for example, cares
about some of the items/actions, maybe this logging should be done on their side.
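
Just for context, the application-side usage I would expect, going by the
two-pass contract documented in your patch, is something like this (a
rough, untested sketch; the RTE_LOG target and the attr/pattern/actions
names are only for illustration):

	size_t len = 0;
	char *str;

	/* First pass: zero-size buffer only computes the string length. */
	if (rte_flow_snprint(NULL, 0, &len, attr, pattern, actions) != 0)
		return;

	/* Second pass: buffer size must be the string length plus one. */
	str = malloc(len + 1);
	if (str != NULL &&
	    rte_flow_snprint(str, len + 1, &len, attr, pattern, actions) == 0)
		RTE_LOG(INFO, USER1, "flow: %s\n", str);

	free(str);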

> - There's the ENCAP action, which also can't be fully represented by the
> tool in question, simply because it has no parameters. In testpmd, one
> first has to issue the "set vxlan" command to configure the encap.
> header, whilst the "vxlan" token in the flow rule string just refers to
> the previously set encap. parameters. The suggested flow print helper
> can't reliably print these two components ("set vxlan" and the flow rule
> itself) as they belong to different testpmd command strings.
> 
Again, I agree with you, but as in my answer above: do we want a partial
solution in DPDK?
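
(For readers following the thread, the two-step testpmd sequence under
discussion looks roughly like the below; each command goes on one line
in practice, and the VNI and addresses are made up for illustration:

	testpmd> set vxlan ip-version ipv4 vni 4 udp-src 4789 udp-dst 4789
	         ip-src 192.168.0.1 ip-dst 192.168.0.2
	         eth-src 11:22:33:44:55:66 eth-dst 66:55:44:33:22:11
	testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end
	         actions vxlan_encap / queue index 0 / end

Only the "flow create" line could ever come out of the proposed helper;
the "set vxlan" state it depends on is not part of the flow rule.)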

> As you might see, completeness of the solution wouldn't necessarily be
> reachable, even if full item/action coverage were provided.
> 
> As for the item/action coverage itself, it's rather controversial. On
> the one hand, yes, we should probably try to cover more items and
> actions in the suggested patch, to the extent allowed by our current
> priorities. But on the other hand, the existing coverage might not be
> that poor: it's fairly elaborate and at least allows printing the most
> common flow rules.
> 
That is my main issue: you are going to push something that is good for your
use case and maybe some others, but it can't be used by all applications, even
with the most basic actions like encap.

> Yes, macros and some other cunning ways to cover more flow specifics
> might come in handy, but, at the same time, can be rather error prone.
> Sometimes it's more robust to just write the code out in full.
> 
I'm always in favor of easy over extra complex, but too hard to extend is also
not good.
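
Something along the lines below is what I have in mind: an untested
sketch against the helpers in your patch, just to cut the per-field
boilerplate. It assumes the buf/spec/last/mask/rc names your item
handlers already declare:

	#define SNPRINT_ITEM_FIELD(_dump_cb, _mask_cb, _name, _field)	\
		do {							\
			rc = rte_flow_snprint_item_field(buf, buf_size,	\
					nb_chars_total, _dump_cb,	\
					_mask_cb, _name,		\
					sizeof(spec->_field),		\
					&spec->_field, &last->_field,	\
					&mask->_field, NULL);		\
			if (rc != 0)					\
				return rc;				\
		} while (0)

With that, e.g. the UDP item handler body shrinks to:

	SNPRINT_ITEM_FIELD(rte_flow_snprint_uint16_be2cpu,
			   rte_flow_snprint_hex16_be2cpu, "src",
			   hdr.src_port);
	SNPRINT_ITEM_FIELD(rte_flow_snprint_uint16_be2cpu,
			   rte_flow_snprint_hex16_be2cpu, "dst",
			   hdr.dst_port);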

Thanks,
Ori
> Thank you.
> 
> On 30/05/2021 10:27, Ori Kam wrote:
> > Hi Ivan,
> >
> > First, nice idea, and thanks for picking up the ball.
> >
> > Before a detailed review,
> > the main thing I'm concerned about is that this print will be only
> > partially supported.
> > I know that you covered this issue by printing "unknown" for unsupported
> > items/actions, but this means that it is enough for one item/action to be
> > unsupported and the flow already can't be used in testpmd.
> > To get full support, the developer needs to add such a print for each new
> > item/action. I agree it is possible, but it adds high overhead to each
> > feature.
> >
> > Maybe we should somehow create macros for the prints, or find other
> > approaches that are easier to maintain.
> >
> > For example, just printing the ipv4 item takes 7 function calls, each one
> > with error checking, and I'm not counting the dedicated helper functions.
> >
> >
> >
> > Best,
> > Ori
> >
> >
> >> -----Original Message-----
> >> From: Ivan Malov <ivan.malov@oktetlabs.ru>
> >> Sent: Thursday, May 27, 2021 11:25 AM
> >> To: dev@dpdk.org
> >> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> >> <ferruh.yigit@intel.com>; Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>; Ori Kam <orika@nvidia.com>; Ray
> >> Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> >> Subject: [RFC PATCH] ethdev: add support for testpmd-compliant flow
> >> rule dumping
> >>
> >> DPDK applications (for example, OVS) or tests which use the RTE flow API
> >> need to log created or rejected flow rules to help recognise what goes
> >> right or wrong. From this standpoint, a testpmd-compliant format is nice
> >> for the purpose because it allows one to copy-paste the flow rules and
> >> debug using testpmd.
> >>
> >> Recognisable pattern items:
> >> VOID, VF, PF, PHY_PORT, PORT_ID, ETH, VLAN, IPV4, IPV6, UDP, TCP,
> >> VXLAN, NVGRE, GENEVE, MARK, PPPOES, PPPOED.
> >>
> >> Recognisable actions:
> >> VOID, JUMP, MARK, FLAG, QUEUE, DROP, COUNT, RSS, PF, VF, PHY_PORT,
> >> PORT_ID, OF_POP_VLAN, OF_PUSH_VLAN, OF_SET_VLAN_VID,
> >> OF_SET_VLAN_PCP, VXLAN_ENCAP, VXLAN_DECAP.
> >>
> >> Recognisable RSS types (action RSS):
> >> IPV4, FRAG_IPV4, NONFRAG_IPV4_TCP, NONFRAG_IPV4_UDP,
> >> NONFRAG_IPV4_OTHER, IPV6, FRAG_IPV6, NONFRAG_IPV6_TCP,
> >> NONFRAG_IPV6_UDP, NONFRAG_IPV6_OTHER, IPV6_EX, IPV6_TCP_EX,
> >> IPV6_UDP_EX, L3_SRC_ONLY, L3_DST_ONLY, L4_SRC_ONLY, L4_DST_ONLY.
> >>
> >> Unrecognised parts of the flow specification are represented by tokens
> >> "{unknown}" and "{unknown bits}". Interested parties are welcome to
> >> extend this tool to recognise more items and actions.
> >>
> >> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> >> ---
> >>   lib/ethdev/meson.build        |    1 +
> >>   lib/ethdev/rte_flow.h         |   33 +
> >>   lib/ethdev/rte_flow_snprint.c | 1681 +++++++++++++++++++++++++++++++++
> >>   lib/ethdev/version.map        |    3 +
> >>   4 files changed, 1718 insertions(+)
> >>   create mode 100644 lib/ethdev/rte_flow_snprint.c
> >>
> >> diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
> >> index 0205c853df..97bba4fa1b 100644
> >> --- a/lib/ethdev/meson.build
> >> +++ b/lib/ethdev/meson.build
> >> @@ -8,6 +8,7 @@ sources = files(
> >>           'rte_class_eth.c',
> >>           'rte_ethdev.c',
> >>           'rte_flow.c',
> >> +	'rte_flow_snprint.c',
> >>           'rte_mtr.c',
> >>           'rte_tm.c',
> >>   )
> >> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> >> index 961a5884fe..cd5e9ef631 100644
> >> --- a/lib/ethdev/rte_flow.h
> >> +++ b/lib/ethdev/rte_flow.h
> >> @@ -4288,6 +4288,39 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> >>   			     struct rte_flow_item *items,
> >>   			     uint32_t num_of_items,
> >>   			     struct rte_flow_error *error);
> >> +
> >> +/**
> >> + * @warning
> >> + * @b EXPERIMENTAL: this API may change without prior notice
> >> + *
> >> + * Dump a testpmd-compliant textual representation of the flow rule.
> >> + * Invoke this with a zero-size buffer to learn the string size, then
> >> + * invoke it a second time to actually dump the flow rule. The buffer
> >> + * size on the second invocation must be the string size + 1.
> >> + *
> >> + * @param[out] buf
> >> + *   Buffer to save the dump in, or NULL
> >> + * @param buf_size
> >> + *   Buffer size, or 0
> >> + * @param[out] nb_chars_total
> >> + *   Resulting string size (excluding the terminating null byte)
> >> + * @param[in] attr
> >> + *   Flow rule attributes.
> >> + * @param[in] pattern
> >> + *   Pattern specification (list terminated by the END pattern item).
> >> + * @param[in] actions
> >> + *   Associated actions (list terminated by the END action).
> >> + *
> >> + * @return
> >> + *   0 on success, a negative errno value otherwise
> >> + */
> >> +__rte_experimental
> >> +int
> >> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		 const struct rte_flow_attr *attr,
> >> +		 const struct rte_flow_item pattern[],
> >> +		 const struct rte_flow_action actions[]);
> >> +
> >>   #ifdef __cplusplus
> >>   }
> >>   #endif
> >> diff --git a/lib/ethdev/rte_flow_snprint.c b/lib/ethdev/rte_flow_snprint.c
> >> new file mode 100644
> >> index 0000000000..513886528b
> >> --- /dev/null
> >> +++ b/lib/ethdev/rte_flow_snprint.c
> >> @@ -0,0 +1,1681 @@
> >> +/* SPDX-License-Identifier: BSD-3-Clause
> >> + *
> >> + * Copyright(c) 2021 Xilinx, Inc.
> >> + */
> >> +
> >> +#include <stdbool.h>
> >> +#include <stdint.h>
> >> +#include <string.h>
> >> +
> >> +#include <arpa/inet.h> /* inet_ntop(), INET_ADDRSTRLEN, INET6_ADDRSTRLEN */
> >> +
> >> +#include <rte_common.h>
> >> +#include "rte_ethdev.h"
> >> +#include "rte_flow.h"
> >> +
> >> +static int
> >> +rte_flow_snprint_str(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		     const void *value_ptr)
> >> +{
> >> +	const char *str = value_ptr;
> >> +	size_t write_size_max;
> >> +	int retv;
> >> +
> >> +	write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> >> +	retv = snprintf(buf + *nb_chars_total, write_size_max, " %s", str);
> >> +	if (retv < 0)
> >> +		return -EFAULT;
> >> +
> >> +	*nb_chars_total += retv;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ether_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			    const void *value_ptr)
> >> +{
> >> +	const struct rte_ether_addr *ea = value_ptr;
> >> +	const uint8_t *ab = ea->addr_bytes;
> >> +	size_t write_size_max;
> >> +	int retv;
> >> +
> >> +	write_size_max = buf_size - RTE_MIN(buf_size, *nb_chars_total);
> >> +	retv = snprintf(buf + *nb_chars_total, write_size_max,
> >> +			" %02x:%02x:%02x:%02x:%02x:%02x",
> >> +			ab[0], ab[1], ab[2], ab[3], ab[4], ab[5]);
> >> +	if (retv < 0)
> >> +		return -EFAULT;
> >> +
> >> +	*nb_chars_total += retv;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ipv4_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   const void *value_ptr)
> >> +{
> >> +	char addr_str[INET_ADDRSTRLEN];
> >> +
> >> +	if (inet_ntop(AF_INET, value_ptr, addr_str, sizeof(addr_str)) ==
> >> NULL)
> >> +		return -EFAULT;
> >> +
> >> +	return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_ipv6_addr(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   const void *value_ptr)
> >> +{
> >> +	char addr_str[INET6_ADDRSTRLEN];
> >> +
> >> +	if (inet_ntop(AF_INET6, value_ptr, addr_str, sizeof(addr_str)) ==
> >> NULL)
> >> +		return -EFAULT;
> >> +
> >> +	return rte_flow_snprint_str(buf, buf_size, nb_chars_total, addr_str);
> >> +}
> >> +
> >> +#define SNPRINT(_type, _fmt) \
> >> +	do {								\
> >> +		const _type *vp = value_ptr;				\
> >> +		size_t write_size_max;					\
> >> +		int retv;						\
> >> +									\
> >> +		write_size_max = buf_size -				\
> >> +				 RTE_MIN(buf_size, *nb_chars_total);	\
> >> +		retv = snprintf(buf + *nb_chars_total, write_size_max,	\
> >> +				_fmt, *vp);				\
> >> +		if (retv < 0)						\
> >> +			return -EFAULT;					\
> >> +									\
> >> +		*nb_chars_total += retv;				\
> >> +									\
> >> +		return 0;						\
> >> +	} while (0)
> >> +
> >> +static int
> >> +rte_flow_snprint_uint32(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +			const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint32_t, " %u");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex32(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		       const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint32_t, " 0x%08x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex24(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		       const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint32_t, " 0x%06x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex20(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		       const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint32_t, " 0x%05x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint16(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +			const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint16_t, " %hu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint16_be2cpu(char *buf, size_t buf_size,
> >> +			       size_t *nb_chars_total, const void *value_ptr) {
> >> +	const uint16_t *valuep = value_ptr;
> >> +	uint16_t value = rte_be_to_cpu_16(*valuep);
> >> +
> >> +	value_ptr = &value;
> >> +
> >> +	SNPRINT(uint16_t, " %hu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex16_be2cpu(char *buf, size_t buf_size,
> >> +			      size_t *nb_chars_total, const void *value_ptr) {
> >> +	const uint16_t *valuep = value_ptr;
> >> +	uint16_t value = rte_be_to_cpu_16(*valuep);
> >> +
> >> +	value_ptr = &value;
> >> +
> >> +	SNPRINT(uint16_t, " 0x%04x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_uint8(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		       const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint8_t, " %hhu");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_hex8(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		      const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint8_t, " 0x%02x");
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_byte(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		      const void *value_ptr)
> >> +{
> >> +	SNPRINT(uint8_t, "%02x");
> >> +}
> >> +
> >> +#undef SNPRINT
> >> +
> >> +static int
> >> +rte_flow_snprint_attr(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		      const struct rte_flow_attr *attr) {
> >> +	int rc;
> >> +
> >> +	if (attr == NULL)
> >> +		return 0;
> >> +
> >> +	if (attr->group != 0) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "group");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> +					     &attr->group);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	if (attr->priority != 0) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "priority");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> +					     &attr->priority);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	if (attr->transfer) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "transfer");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	if (attr->ingress) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "ingress");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	if (attr->egress) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "egress");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static void
> >> +rte_flow_item_init_parse(const struct rte_flow_item *item, size_t
> >> item_size,
> >> +			 void *spec, void *last, void *mask) {
> >> +	if (item->spec != NULL)
> >> +		memcpy(spec, item->spec, item_size);
> >> +	else
> >> +		memset(spec, 0, item_size);
> >> +
> >> +	if (item->last != NULL)
> >> +		memcpy(last, item->last, item_size);
> >> +	else
> >> +		memset(last, 0, item_size);
> >> +
> >> +	if (item->mask != NULL)
> >> +		memcpy(mask, item->mask, item_size);
> >> +	else
> >> +		memset(mask, 0, item_size);
> >> +}
> >> +
> >> +static bool
> >> +rte_flow_buf_is_all_zeros(const void *buf_ptr, size_t buf_size) {
> >> +	const uint8_t *buf = buf_ptr;
> >> +	unsigned int i;
> >> +	uint8_t t = 0;
> >> +
> >> +	for (i = 0; i < buf_size; ++i)
> >> +		t |= buf[i];
> >> +
> >> +	return (t == 0);
> >> +}
> >> +
> >> +static bool
> >> +rte_flow_buf_is_all_ones(const void *buf_ptr, size_t buf_size) {
> >> +	const uint8_t *buf = buf_ptr;
> >> +	unsigned int i;
> >> +	uint8_t t = ~0;
> >> +
> >> +	for (i = 0; i < buf_size; ++i)
> >> +		t &= buf[i];
> >> +
> >> +	return (t == (uint8_t)(~0));
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_field(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			    int (*value_dump_cb)(char *, size_t, size_t *,
> >> +						 const void *),
> >> +			    int (*mask_dump_cb)(char *, size_t, size_t *,
> >> +						const void *),
> >> +			    const char *field_name, size_t field_size,
> >> +			    void *field_spec, void *field_last,
> >> +			    void *field_mask, void *field_full_mask) {
> >> +	bool mask_is_all_ones;
> >> +	bool last_is_futile;
> >> +	int rc;
> >> +
> >> +	if (rte_flow_buf_is_all_zeros(field_mask, field_size))
> >> +		return 0;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> field_name);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (field_full_mask != NULL) {
> >> +		mask_is_all_ones = (memcmp(field_mask, field_full_mask,
> >> +					   field_size) == 0);
> >> +	} else {
> >> +		mask_is_all_ones = rte_flow_buf_is_all_ones(field_mask,
> >> +							    field_size);
> >> +	}
> >> +	last_is_futile = rte_flow_buf_is_all_zeros(field_last, field_size) ||
> >> +			 (memcmp(field_spec, field_last, field_size) == 0);
> >> +
> >> +	if (mask_is_all_ones && last_is_futile) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "is");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = value_dump_cb(buf, buf_size, nb_chars_total,
> >> field_spec);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		goto done;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "spec");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = value_dump_cb(buf, buf_size, nb_chars_total, field_spec);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (!last_is_futile) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  field_name);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "last");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = value_dump_cb(buf, buf_size, nb_chars_total,
> >> field_last);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> field_name);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "mask");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = mask_dump_cb(buf, buf_size, nb_chars_total, field_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +done:
> >> +	/*
> >> +	 * Zeroise the printed field. When all item fields have been printed,
> >> +	 * the corresponding item handler will make sure that the whole item
> >> +	 * mask is all-zeros. This is needed to highlight unsupported fields.
> >> +	 *
> >> +	 * If the provided field mask pointer refers to a separate container
> >> +	 * rather than to the field in the item mask directly, it's the duty
> >> +	 * of the item handler to clear the field in the item mask correctly.
> >> +	 */
> >> +	memset(field_mask, 0, field_size);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vf(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			 void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_vf *spec = spec_ptr;
> >> +	struct rte_flow_item_vf *last = last_ptr;
> >> +	struct rte_flow_item_vf *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex32, "id",
> >> +					 sizeof(spec->id), &spec->id,
> >> +					 &last->id, &mask->id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_phy_port(char *buf, size_t buf_size,
> >> +			       size_t *nb_chars_total, void *spec_ptr,
> >> +			       void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_phy_port *spec = spec_ptr;
> >> +	struct rte_flow_item_phy_port *last = last_ptr;
> >> +	struct rte_flow_item_phy_port *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex32, "index",
> >> +					 sizeof(spec->index), &spec->index,
> >> +					 &last->index, &mask->index, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_port_id(char *buf, size_t buf_size,
> >> +			      size_t *nb_chars_total, void *spec_ptr,
> >> +			      void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_port_id *spec = spec_ptr;
> >> +	struct rte_flow_item_port_id *last = last_ptr;
> >> +	struct rte_flow_item_port_id *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex32, "id",
> >> +					 sizeof(spec->id), &spec->id,
> >> +					 &last->id, &mask->id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_eth(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			  void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_eth *spec = spec_ptr;
> >> +	struct rte_flow_item_eth *last = last_ptr;
> >> +	struct rte_flow_item_eth *mask = mask_ptr;
> >> +	uint8_t has_vlan_full_mask = 1;
> >> +	uint8_t has_vlan_spec;
> >> +	uint8_t has_vlan_last;
> >> +	uint8_t has_vlan_mask;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ether_addr,
> >> +					 rte_flow_snprint_ether_addr, "dst",
> >> +					 sizeof(spec->hdr.d_addr),
> >> +					 &spec->hdr.d_addr, &last->hdr.d_addr,
> >> +					 &mask->hdr.d_addr, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ether_addr,
> >> +					 rte_flow_snprint_ether_addr, "src",
> >> +					 sizeof(spec->hdr.s_addr),
> >> +					 &spec->hdr.s_addr, &last->hdr.s_addr,
> >> +					 &mask->hdr.s_addr, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "type",
> >> +					 sizeof(spec->hdr.ether_type),
> >> +					 &spec->hdr.ether_type,
> >> +					 &last->hdr.ether_type,
> >> +					 &mask->hdr.ether_type, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	has_vlan_spec = spec->has_vlan;
> >> +	has_vlan_last = last->has_vlan;
> >> +	has_vlan_mask = mask->has_vlan;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_uint8, "has_vlan",
> >> +					 sizeof(has_vlan_spec),
> >> &has_vlan_spec,
> >> +					 &has_vlan_last, &has_vlan_mask,
> >> +					 &has_vlan_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	mask->has_vlan = 0;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vlan(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_vlan *spec = spec_ptr;
> >> +	struct rte_flow_item_vlan *last = last_ptr;
> >> +	struct rte_flow_item_vlan *mask = mask_ptr;
> >> +	uint8_t has_more_vlan_full_mask = 1;
> >> +	uint8_t has_more_vlan_spec;
> >> +	uint8_t has_more_vlan_last;
> >> +	uint8_t has_more_vlan_mask;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "tci",
> >> +					 sizeof(spec->hdr.vlan_tci),
> >> +					 &spec->hdr.vlan_tci,
> >> +					 &last->hdr.vlan_tci,
> >> +					 &mask->hdr.vlan_tci, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 "inner_type",
> >> +					 sizeof(spec->hdr.eth_proto),
> >> +					 &spec->hdr.eth_proto,
> >> +					 &last->hdr.eth_proto,
> >> +					 &mask->hdr.eth_proto, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	has_more_vlan_spec = spec->has_more_vlan;
> >> +	has_more_vlan_last = last->has_more_vlan;
> >> +	has_more_vlan_mask = mask->has_more_vlan;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_uint8,
> >> +					 "has_more_vlan",
> >> +					 sizeof(has_more_vlan_spec),
> >> +					 &has_more_vlan_spec,
> >> +					 &has_more_vlan_last,
> >> +					 &has_more_vlan_mask,
> >> +					 &has_more_vlan_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	mask->has_more_vlan = 0;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_ipv4(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_ipv4 *spec = spec_ptr;
> >> +	struct rte_flow_item_ipv4 *last = last_ptr;
> >> +	struct rte_flow_item_ipv4 *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex8,
> >> +					 rte_flow_snprint_hex8, "tos",
> >> +					 sizeof(spec->hdr.type_of_service),
> >> +					 &spec->hdr.type_of_service,
> >> +					 &last->hdr.type_of_service,
> >> +					 &mask->hdr.type_of_service,
> >> NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 "packet_id",
> >> +					 sizeof(spec->hdr.packet_id),
> >> +					 &spec->hdr.packet_id,
> >> +					 &last->hdr.packet_id,
> >> +					 &mask->hdr.packet_id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 "fragment_offset",
> >> +					 sizeof(spec->hdr.fragment_offset),
> >> +					 &spec->hdr.fragment_offset,
> >> +					 &last->hdr.fragment_offset,
> >> +					 &mask->hdr.fragment_offset,
> >> NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_hex8, "ttl",
> >> +					 sizeof(spec->hdr.time_to_live),
> >> +					 &spec->hdr.time_to_live,
> >> +					 &last->hdr.time_to_live,
> >> +					 &mask->hdr.time_to_live, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_hex8, "proto",
> >> +					 sizeof(spec->hdr.next_proto_id),
> >> +					 &spec->hdr.next_proto_id,
> >> +					 &last->hdr.next_proto_id,
> >> +					 &mask->hdr.next_proto_id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ipv4_addr,
> >> +					 rte_flow_snprint_ipv4_addr, "src",
> >> +					 sizeof(spec->hdr.src_addr),
> >> +					 &spec->hdr.src_addr,
> >> +					 &last->hdr.src_addr,
> >> +					 &mask->hdr.src_addr, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ipv4_addr,
> >> +					 rte_flow_snprint_ipv4_addr, "dst",
> >> +					 sizeof(spec->hdr.dst_addr),
> >> +					 &spec->hdr.dst_addr,
> >> +					 &last->hdr.dst_addr,
> >> +					 &mask->hdr.dst_addr, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_ipv6(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	uint32_t tc_full_mask = (RTE_IPV6_HDR_TC_MASK >>
> >> RTE_IPV6_HDR_TC_SHIFT);
> >> +	uint32_t fl_full_mask = (RTE_IPV6_HDR_FL_MASK >>
> >> RTE_IPV6_HDR_FL_SHIFT);
> >> +	struct rte_flow_item_ipv6 *spec = spec_ptr;
> >> +	struct rte_flow_item_ipv6 *last = last_ptr;
> >> +	struct rte_flow_item_ipv6 *mask = mask_ptr;
> >> +	uint8_t has_frag_ext_full_mask = 1;
> >> +	uint8_t has_frag_ext_spec;
> >> +	uint8_t has_frag_ext_last;
> >> +	uint8_t has_frag_ext_mask;
> >> +	uint32_t vtc_flow;
> >> +	uint32_t fl_spec;
> >> +	uint32_t fl_last;
> >> +	uint32_t fl_mask;
> >> +	uint32_t tc_spec;
> >> +	uint32_t tc_last;
> >> +	uint32_t tc_mask;
> >> +	int rc;
> >> +
> >> +	vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
> >> +	tc_spec = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> +	fl_spec = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> +	vtc_flow = rte_be_to_cpu_32(last->hdr.vtc_flow);
> >> +	tc_last = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> +	fl_last = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> +	vtc_flow = rte_be_to_cpu_32(mask->hdr.vtc_flow);
> >> +	tc_mask = (vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
> >> RTE_IPV6_HDR_TC_SHIFT;
> >> +	fl_mask = (vtc_flow & RTE_IPV6_HDR_FL_MASK) >>
> >> RTE_IPV6_HDR_FL_SHIFT;
> >> +
> >> +	mask->hdr.vtc_flow &=
> >> ~rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK |
> >> +						RTE_IPV6_HDR_FL_MASK);
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex8,
> >> +					 rte_flow_snprint_hex8, "tc",
> >> +					 sizeof(tc_spec), &tc_spec, &tc_last,
> >> +					 &tc_mask, &tc_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex20, "flow",
> >> +					 sizeof(fl_spec), &fl_spec, &fl_last,
> >> +					 &fl_mask, &fl_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_hex8, "proto",
> >> +					 sizeof(spec->hdr.proto),
> >> +					 &spec->hdr.proto,
> >> +					 &last->hdr.proto,
> >> +					 &mask->hdr.proto, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_hex8, "hop",
> >> +					 sizeof(spec->hdr.hop_limits),
> >> +					 &spec->hdr.hop_limits,
> >> +					 &last->hdr.hop_limits,
> >> +					 &mask->hdr.hop_limits, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ipv6_addr,
> >> +					 rte_flow_snprint_ipv6_addr, "src",
> >> +					 sizeof(spec->hdr.src_addr),
> >> +					 &spec->hdr.src_addr,
> >> +					 &last->hdr.src_addr,
> >> +					 &mask->hdr.src_addr, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_ipv6_addr,
> >> +					 rte_flow_snprint_ipv6_addr, "dst",
> >> +					 sizeof(spec->hdr.dst_addr),
> >> +					 &spec->hdr.dst_addr,
> >> +					 &last->hdr.dst_addr,
> >> +					 &mask->hdr.dst_addr, NULL);
> >> +
> >> +	has_frag_ext_spec = spec->has_frag_ext;
> >> +	has_frag_ext_last = last->has_frag_ext;
> >> +	has_frag_ext_mask = mask->has_frag_ext;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint8,
> >> +					 rte_flow_snprint_uint8,
> >> "has_frag_ext",
> >> +					 sizeof(has_frag_ext_spec),
> >> +					 &has_frag_ext_spec,
> >> &has_frag_ext_last,
> >> +					 &has_frag_ext_mask,
> >> +					 &has_frag_ext_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	mask->has_frag_ext = 0;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_udp(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			  void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_udp *spec = spec_ptr;
> >> +	struct rte_flow_item_udp *last = last_ptr;
> >> +	struct rte_flow_item_udp *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "src",
> >> +					 sizeof(spec->hdr.src_port),
> >> +					 &spec->hdr.src_port,
> >> +					 &last->hdr.src_port,
> >> +					 &mask->hdr.src_port, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "dst",
> >> +					 sizeof(spec->hdr.dst_port),
> >> +					 &spec->hdr.dst_port,
> >> +					 &last->hdr.dst_port,
> >> +					 &mask->hdr.dst_port, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_tcp(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			  void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_tcp *spec = spec_ptr;
> >> +	struct rte_flow_item_tcp *last = last_ptr;
> >> +	struct rte_flow_item_tcp *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "src",
> >> +					 sizeof(spec->hdr.src_port),
> >> +					 &spec->hdr.src_port,
> >> +					 &last->hdr.src_port,
> >> +					 &mask->hdr.src_port, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> "dst",
> >> +					 sizeof(spec->hdr.dst_port),
> >> +					 &spec->hdr.dst_port,
> >> +					 &last->hdr.dst_port,
> >> +					 &mask->hdr.dst_port, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex8,
> >> +					 rte_flow_snprint_hex8, "flags",
> >> +					 sizeof(spec->hdr.tcp_flags),
> >> +					 &spec->hdr.tcp_flags,
> >> +					 &last->hdr.tcp_flags,
> >> +					 &mask->hdr.tcp_flags, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_vxlan(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			    void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_vxlan *spec = spec_ptr;
> >> +	struct rte_flow_item_vxlan *last = last_ptr;
> >> +	struct rte_flow_item_vxlan *mask = mask_ptr;
> >> +	uint32_t vni_full_mask = 0xffffff;
> >> +	uint32_t vni_spec;
> >> +	uint32_t vni_last;
> >> +	uint32_t vni_mask;
> >> +	int rc;
> >> +
> >> +	vni_spec = rte_be_to_cpu_32(spec->hdr.vx_vni) >> 8;
> >> +	vni_last = rte_be_to_cpu_32(last->hdr.vx_vni) >> 8;
> >> +	vni_mask = rte_be_to_cpu_32(mask->hdr.vx_vni) >> 8;
> >> +
> >> +	mask->hdr.vx_vni &= ~RTE_BE32(0xffffff00);
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex24, "vni",
> >> +					 sizeof(vni_spec), &vni_spec,
> >> +					 &vni_last, &vni_mask,
> >> +					 &vni_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_nvgre(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			    void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_nvgre *spec = spec_ptr;
> >> +	struct rte_flow_item_nvgre *last = last_ptr;
> >> +	struct rte_flow_item_nvgre *mask = mask_ptr;
> >> +	uint32_t *tni_and_flow_id_specp = (uint32_t *)spec->tni;
> >> +	uint32_t *tni_and_flow_id_lastp = (uint32_t *)last->tni;
> >> +	uint32_t *tni_and_flow_id_maskp = (uint32_t *)mask->tni;
> >> +	uint32_t tni_full_mask = 0xffffff;
> >> +	uint32_t tni_spec;
> >> +	uint32_t tni_last;
> >> +	uint32_t tni_mask;
> >> +	int rc;
> >> +
> >> +	tni_spec = rte_be_to_cpu_32(*tni_and_flow_id_specp) >> 8;
> >> +	tni_last = rte_be_to_cpu_32(*tni_and_flow_id_lastp) >> 8;
> >> +	tni_mask = rte_be_to_cpu_32(*tni_and_flow_id_maskp) >> 8;
> >> +
> >> +	memset(mask->tni, 0, sizeof(mask->tni));
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex24, "tni",
> >> +					 sizeof(tni_spec), &tni_spec,
> >> +					 &tni_last, &tni_mask,
> >> +					 &tni_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_geneve(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			     void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_geneve *spec = spec_ptr;
> >> +	struct rte_flow_item_geneve *last = last_ptr;
> >> +	struct rte_flow_item_geneve *mask = mask_ptr;
> >> +	uint32_t *vni_and_rsvd_specp = (uint32_t *)spec->vni;
> >> +	uint32_t *vni_and_rsvd_lastp = (uint32_t *)last->vni;
> >> +	uint32_t *vni_and_rsvd_maskp = (uint32_t *)mask->vni;
> >> +	uint32_t vni_full_mask = 0xffffff;
> >> +	uint16_t optlen_full_mask = 0x3f;
> >> +	uint16_t optlen_spec;
> >> +	uint16_t optlen_last;
> >> +	uint16_t optlen_mask;
> >> +	uint32_t vni_spec;
> >> +	uint32_t vni_last;
> >> +	uint32_t vni_mask;
> >> +	int rc;
> >> +
> >> +	optlen_spec = rte_be_to_cpu_16(spec->ver_opt_len_o_c_rsvd0) &
> >> 0x3f00;
> >> +	optlen_spec >>= 8;
> >> +
> >> +	optlen_last = rte_be_to_cpu_16(last->ver_opt_len_o_c_rsvd0) &
> >> 0x3f00;
> >> +	optlen_last >>= 8;
> >> +
> >> +	optlen_mask = rte_be_to_cpu_16(mask->ver_opt_len_o_c_rsvd0)
> >> & 0x3f00;
> >> +	optlen_mask >>= 8;
> >> +
> >> +	mask->ver_opt_len_o_c_rsvd0 &= ~RTE_BE16(0x3f00);
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16,
> >> +					 rte_flow_snprint_hex8, "optlen",
> >> +					 sizeof(optlen_spec), &optlen_spec,
> >> +					 &optlen_last, &optlen_mask,
> >> +					 &optlen_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 "protocol", sizeof(spec->protocol),
> >> +					 &spec->protocol, &last->protocol,
> >> +					 &mask->protocol, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	vni_spec = rte_be_to_cpu_32(*vni_and_rsvd_specp) >> 8;
> >> +	vni_last = rte_be_to_cpu_32(*vni_and_rsvd_lastp) >> 8;
> >> +	vni_mask = rte_be_to_cpu_32(*vni_and_rsvd_maskp) >> 8;
> >> +
> >> +	memset(mask->vni, 0, sizeof(mask->vni));
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex24, "vni",
> >> +					 sizeof(vni_spec), &vni_spec,
> >> +					 &vni_last, &vni_mask,
> >> +					 &vni_full_mask);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_mark(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_mark *spec = spec_ptr;
> >> +	struct rte_flow_item_mark *last = last_ptr;
> >> +	struct rte_flow_item_mark *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint32,
> >> +					 rte_flow_snprint_hex32, "id",
> >> +					 sizeof(spec->id), &spec->id,
> >> +					 &last->id, &mask->id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_item_pppoed(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			     void *spec_ptr, void *last_ptr, void *mask_ptr) {
> >> +	struct rte_flow_item_pppoe *spec = spec_ptr;
> >> +	struct rte_flow_item_pppoe *last = last_ptr;
> >> +	struct rte_flow_item_pppoe *mask = mask_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_item_field(buf, buf_size, nb_chars_total,
> >> +					 rte_flow_snprint_uint16_be2cpu,
> >> +					 rte_flow_snprint_hex16_be2cpu,
> >> +					 "seid", sizeof(spec->session_id),
> >> +					 &spec->session_id, &last->session_id,
> >> +					 &mask->session_id, NULL);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static const struct {
> >> +	const char *name;
> >> +	int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_char_total,
> >> +			void *spec_ptr, void *last_ptr, void *mask_ptr);
> >> +	size_t size;
> >> +} item_table[] = {
> >> +	[RTE_FLOW_ITEM_TYPE_VOID] = {
> >> +		.name = "void"
> >> +	},
> >> +	[RTE_FLOW_ITEM_TYPE_PF] = {
> >> +		.name = "pf"
> >> +	},
> >> +	[RTE_FLOW_ITEM_TYPE_PPPOES] = {
> >> +		.name = "pppoes"
> >> +	},
> >> +	[RTE_FLOW_ITEM_TYPE_PPPOED] = {
> >> +		.name = "pppoed",
> >> +		.parse_cb = rte_flow_snprint_item_pppoed,
> >> +		.size = sizeof(struct rte_flow_item_pppoe)
> >> +	},
> >> +
> >> +#define ITEM(_name_uppercase, _name_lowercase) \
> >> +	[RTE_FLOW_ITEM_TYPE_##_name_uppercase] = {			\
> >> +		.name = #_name_lowercase,				\
> >> +		.parse_cb = rte_flow_snprint_item_##_name_lowercase,	\
> >> +		.size = sizeof(struct rte_flow_item_##_name_lowercase)	\
> >> +	}
> >> +
> >> +	ITEM(VF, vf),
> >> +	ITEM(PHY_PORT, phy_port),
> >> +	ITEM(PORT_ID, port_id),
> >> +	ITEM(ETH, eth),
> >> +	ITEM(VLAN, vlan),
> >> +	ITEM(IPV4, ipv4),
> >> +	ITEM(IPV6, ipv6),
> >> +	ITEM(UDP, udp),
> >> +	ITEM(TCP, tcp),
> >> +	ITEM(VXLAN, vxlan),
> >> +	ITEM(NVGRE, nvgre),
> >> +	ITEM(GENEVE, geneve),
> >> +	ITEM(MARK, mark),
> >> +
> >> +#undef ITEM
> >> +};
> >> +
> >> +static int
> >> +rte_flow_snprint_item(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		      const struct rte_flow_item *item) {
> >> +	int rc;
> >> +
> >> +	if (item->type < 0 || item->type >= RTE_DIM(item_table) ||
> >> +	    item_table[item->type].name == NULL) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "{unknown}");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		goto out;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +				  item_table[item->type].name);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (item_table[item->type].parse_cb != NULL) {
> >> +		size_t item_size = item_table[item->type].size;
> >> +		uint8_t spec[item_size];
> >> +		uint8_t last[item_size];
> >> +		uint8_t mask[item_size];
> >> +
> >> +		rte_flow_item_init_parse(item, item_size, spec, last, mask);
> >> +
> >> +		rc = item_table[item->type].parse_cb(buf, buf_size,
> >> +						     nb_chars_total,
> >> +						     spec, last, mask);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		if (!rte_flow_buf_is_all_zeros(mask, item_size)) {
> >> +			rc = rte_flow_snprint_str(buf, buf_size,
> >> +						    nb_chars_total,
> >> +						    "{unknown bits}");
> >> +			if (rc != 0)
> >> +				return rc;
> >> +		}
> >> +	}
> >> +
> >> +out:
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_pattern(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			 const struct rte_flow_item pattern[]) {
> >> +	const struct rte_flow_item *item;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "pattern");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (pattern == NULL)
> >> +		goto end;
> >> +
> >> +	for (item = pattern;
> >> +	     item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
> >> +		rc = rte_flow_snprint_item(buf, buf_size, nb_chars_total,
> >> item);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +end:
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_jump(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			     const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_jump *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "group");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> +				     &conf->group);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_mark(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			     const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_mark *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_queue(char *buf, size_t buf_size,
> >> +			      size_t *nb_chars_total, const void *conf_ptr) {
> >> +	const struct rte_flow_action_queue *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> >> +				     &conf->index);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_count(char *buf, size_t buf_size,
> >> +			      size_t *nb_chars_total, const void *conf_ptr) {
> >> +	const struct rte_flow_action_count *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "identifier");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (conf->shared) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "shared");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_func(char *buf, size_t buf_size,
> >> +				 size_t *nb_chars_total,
> >> +				 enum rte_eth_hash_function func)
> >> +{
> >> +	int rc;
> >> +
> >> +	if (func == RTE_ETH_HASH_FUNCTION_DEFAULT)
> >> +		return 0;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "func");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	switch (func) {
> >> +	case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "toeplitz");
> >> +		break;
> >> +	case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "simple_xor");
> >> +		break;
> >> +	case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "symmetric_toeplitz");
> >> +		break;
> >> +	default:
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "{unknown}");
> >> +		break;
> >> +	}
> >> +
> >> +	return rc;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_level(char *buf, size_t buf_size,
> >> +				  size_t *nb_chars_total, uint32_t level) {
> >> +	int rc;
> >> +
> >> +	if (level == 0)
> >> +		return 0;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "level");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &level);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static const struct {
> >> +	const char *name;
> >> +	uint64_t flag;
> >> +} rss_type_table[] = {
> >> +	{ "ipv4", ETH_RSS_IPV4 },
> >> +	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
> >> +	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
> >> +	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
> >> +	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
> >> +	{ "ipv6", ETH_RSS_IPV6 },
> >> +	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
> >> +	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
> >> +	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
> >> +	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
> >> +	{ "ipv6-ex", ETH_RSS_IPV6_EX },
> >> +	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
> >> +	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
> >> +
> >> +	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
> >> +	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
> >> +	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
> >> +	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
> >> +};
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_types(char *buf, size_t buf_size,
> >> +				  size_t *nb_chars_total, uint64_t types) {
> >> +	unsigned int i;
> >> +	int rc;
> >> +
> >> +	if (types == 0)
> >> +		return 0;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "types");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	for (i = 0; i < RTE_DIM(rss_type_table); ++i) {
> >> +		uint64_t flag = rss_type_table[i].flag;
> >> +
> >> +		if ((types & flag) == 0)
> >> +			continue;
> >> +
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  rss_type_table[i].name);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		types &= ~flag;
> >> +	}
> >> +
> >> +	if (types != 0) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "{unknown}");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss_queues(char *buf, size_t buf_size,
> >> +				   size_t *nb_chars_total,
> >> +				   const uint16_t *queues,
> >> +				   unsigned int nb_queues)
> >> +{
> >> +	unsigned int i;
> >> +	int rc;
> >> +
> >> +	if (nb_queues == 0)
> >> +		return 0;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "queues");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	for (i = 0; i < nb_queues; ++i) {
> >> +		rc = rte_flow_snprint_uint16(buf, buf_size, nb_chars_total,
> >> +					     &queues[i]);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_rss(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			    const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_rss *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_action_rss_func(buf, buf_size, nb_chars_total,
> >> +					      conf->func);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_action_rss_level(buf, buf_size,
> >> nb_chars_total,
> >> +					       conf->level);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_action_rss_types(buf, buf_size,
> >> nb_chars_total,
> >> +					       conf->types);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (conf->key_len != 0) {
> >> +		if (conf->key != NULL) {
> >> +			unsigned int i;
> >> +
> >> +			rc = rte_flow_snprint_str(buf, buf_size,
> >> nb_chars_total,
> >> +						  "" /* results in space */);
> >> +			if (rc != 0)
> >> +				return rc;
> >> +
> >> +			for (i = 0; i < conf->key_len; ++i) {
> >> +				rc = rte_flow_snprint_byte(buf, buf_size,
> >> +							   nb_chars_total,
> >> +							   &conf->key[i]);
> >> +				if (rc != 0)
> >> +					return rc;
> >> +			}
> >> +		}
> >> +
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "key_len");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> +					     &conf->key_len);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_action_rss_queues(buf, buf_size,
> >> nb_chars_total,
> >> +						conf->queue, conf->queue_num);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_vf(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			   const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_vf *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	if (conf->original) {
> >> +		return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					    "original on");
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_phy_port(char *buf, size_t buf_size,
> >> +				 size_t *nb_chars_total, const void
> >> *conf_ptr) {
> >> +	const struct rte_flow_action_phy_port *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	if (conf->original) {
> >> +		return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					    "original on");
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "index");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total,
> >> +				     &conf->index);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_port_id(char *buf, size_t buf_size,
> >> +				size_t *nb_chars_total, const void *conf_ptr)
> >> {
> >> +	const struct rte_flow_action_port_id *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	if (conf->original) {
> >> +		return rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					    "original on");
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "id");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint32(buf, buf_size, nb_chars_total, &conf->id);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_push_vlan(char *buf, size_t buf_size,
> >> +				     size_t *nb_chars_total,
> >> +				     const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_of_push_vlan *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> "ethertype");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_hex16_be2cpu(buf, buf_size, nb_chars_total,
> >> +					   &conf->ethertype);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_set_vlan_vid(char *buf, size_t buf_size,
> >> +					size_t *nb_chars_total,
> >> +					const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_of_set_vlan_vid *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_vid");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint16_be2cpu(buf, buf_size, nb_chars_total,
> >> +					    &conf->vlan_vid);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_action_of_set_vlan_pcp(char *buf, size_t buf_size,
> >> +					size_t *nb_chars_total,
> >> +					const void *conf_ptr)
> >> +{
> >> +	const struct rte_flow_action_of_set_vlan_pcp *conf = conf_ptr;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "vlan_pcp");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_uint8(buf, buf_size, nb_chars_total,
> >> +				    &conf->vlan_pcp);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static const struct {
> >> +	const char *name;
> >> +	int (*parse_cb)(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +			const void *conf_ptr);
> >> +} action_table[] = {
> >> +	[RTE_FLOW_ACTION_TYPE_VOID] = {
> >> +		.name = "void"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_FLAG] = {
> >> +		.name = "flag"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_DROP] = {
> >> +		.name = "drop"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_PF] = {
> >> +		.name = "pf"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
> >> +		.name = "of_pop_vlan"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
> >> +		.name = "vxlan_encap"
> >> +	},
> >> +	[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
> >> +		.name = "vxlan_decap"
> >> +	},
> >> +
> >> +#define ACTION(_name_uppercase, _name_lowercase) \
> >> +	[RTE_FLOW_ACTION_TYPE_##_name_uppercase] = {			\
> >> +		.name = #_name_lowercase,				\
> >> +		.parse_cb = rte_flow_snprint_action_##_name_lowercase,	\
> >> +	}
> >> +
> >> +	ACTION(JUMP, jump),
> >> +	ACTION(MARK, mark),
> >> +	ACTION(QUEUE, queue),
> >> +	ACTION(COUNT, count),
> >> +	ACTION(RSS, rss),
> >> +	ACTION(VF, vf),
> >> +	ACTION(PHY_PORT, phy_port),
> >> +	ACTION(PORT_ID, port_id),
> >> +	ACTION(OF_PUSH_VLAN, of_push_vlan),
> >> +	ACTION(OF_SET_VLAN_VID, of_set_vlan_vid),
> >> +	ACTION(OF_SET_VLAN_PCP, of_set_vlan_pcp),
> >> +
> >> +#undef ACTION
> >> +};
> >> +
> >> +static int
> >> +rte_flow_snprint_action(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +			const struct rte_flow_action *action) {
> >> +	int rc;
> >> +
> >> +	if (action->type < 0 || action->type >= RTE_DIM(action_table) ||
> >> +	    action_table[action->type].name == NULL) {
> >> +		rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +					  "{unknown}");
> >> +		if (rc != 0)
> >> +			return rc;
> >> +
> >> +		goto out;
> >> +	}
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total,
> >> +				  action_table[action->type].name);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (action_table[action->type].parse_cb != NULL &&
> >> +	    action->conf != NULL) {
> >> +		rc = action_table[action->type].parse_cb(buf, buf_size,
> >> +							 nb_chars_total,
> >> +							 action->conf);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +out:
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "/");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +rte_flow_snprint_actions(char *buf, size_t buf_size, size_t
> >> *nb_chars_total,
> >> +			 const struct rte_flow_action actions[]) {
> >> +	const struct rte_flow_action *action;
> >> +	int rc;
> >> +
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "actions");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	if (actions == NULL)
> >> +		goto end;
> >> +
> >> +	for (action = actions;
> >> +	     action->type != RTE_FLOW_ACTION_TYPE_END; ++action) {
> >> +		rc = rte_flow_snprint_action(buf, buf_size, nb_chars_total,
> >> +					     action);
> >> +		if (rc != 0)
> >> +			return rc;
> >> +	}
> >> +
> >> +end:
> >> +	rc = rte_flow_snprint_str(buf, buf_size, nb_chars_total, "end");
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +int
> >> +rte_flow_snprint(char *buf, size_t buf_size, size_t *nb_chars_total,
> >> +		 const struct rte_flow_attr *attr,
> >> +		 const struct rte_flow_item pattern[],
> >> +		 const struct rte_flow_action actions[]) {
> >> +	int rc;
> >> +
> >> +	if (buf == NULL && buf_size != 0)
> >> +		return -EINVAL;
> >> +
> >> +	*nb_chars_total = 0;
> >> +
> >> +	rc = rte_flow_snprint_attr(buf, buf_size, nb_chars_total, attr);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_pattern(buf, buf_size, nb_chars_total,
> >> pattern);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	rc = rte_flow_snprint_actions(buf, buf_size, nb_chars_total,
> >> actions);
> >> +	if (rc != 0)
> >> +		return rc;
> >> +
> >> +	return 0;
> >> +}
> >> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> >> index 44d30b05ae..a626cac944 100644
> >> --- a/lib/ethdev/version.map
> >> +++ b/lib/ethdev/version.map
> >> @@ -249,6 +249,9 @@ EXPERIMENTAL {
> >>   	rte_mtr_meter_policy_delete;
> >>   	rte_mtr_meter_policy_update;
> >>   	rte_mtr_meter_policy_validate;
> >> +
> >> +	# added in 21.08
> >> +	rte_flow_snprint;
> >>   };
> >>
> >>   INTERNAL {
> >> --
> >> 2.20.1
> 
> --
> Ivan M

Thread overview: 16+ messages
2021-05-27  8:25 Ivan Malov
2021-05-30  7:27 ` Ori Kam
2021-05-31  2:28   ` Stephen Hemminger
2021-06-01 14:17     ` Ivan Malov
2021-06-01 15:10       ` Stephen Hemminger
2021-06-01 14:08   ` Ivan Malov
2021-06-02 13:32     ` Ori Kam [this message]
2021-06-02 13:49       ` Andrew Rybchenko
2021-06-03  8:25         ` Ori Kam
2021-06-03  8:43           ` Andrew Rybchenko
2021-06-03  9:27             ` Ori Kam
2023-03-23 12:54               ` Ferruh Yigit
2021-06-02 20:48   ` Stephen Hemminger
2021-06-14 12:42 ` Singh, Aman Deep
2021-06-17  8:11   ` Singh, Aman Deep
2021-07-02 10:20   ` Andrew Rybchenko
