DPDK patches and discussions
* [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW
@ 2020-01-14  3:45 Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 1/4] net/mlx5: support flow dump API Xiaoyu Min
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Xiaoyu Min @ 2020-01-14  3:45 UTC (permalink / raw)
  To: viacheslavo, matan, rasland; +Cc: dev

This RFC intends to provide a way to dump all the rte flows offloaded in
HW. This is very helpful for users and developers to debug flow offloading,
i.e., to check whether the PMD offloads an rte flow correctly from the HW
perspective.

A private PMD API is provided, as well as a socket server that allows an
external tool to trigger the dump.
The output file is in a raw, Mellanox-specific format. A vendor-provided
(Mellanox) external tool is needed to convert it into a human-readable form.

Please be aware that the underlying rdma-core library also needs to support
this feature.
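
For reference, here is a minimal usage sketch of the private API from an
application's point of view (the helper name below is hypothetical and the
error handling is simplified):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_pmd_mlx5.h>

/*
 * Hypothetical helper: dump all mlx5-offloaded flows of a port into a
 * file.  The application must be built with -DALLOW_EXPERIMENTAL_API
 * because the symbol is experimental.
 */
static int
dump_port_flows(uint16_t port_id, const char *path)
{
	FILE *f = fopen(path, "w");
	int ret;

	if (f == NULL)
		return -errno;
	ret = rte_pmd_mlx5_flow_dump(port_id, f); /* 0 on success */
	fclose(f);
	return ret;
}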

Xueming Li (4):
  net/mlx5: support flow dump API
  app/testpmd: new flow dump CLI
  net/mlx5: add socket server for external tools
  doc: update mlx5 document for flow dump feature

 app/test-pmd/Makefile                     |   4 +
 app/test-pmd/cmdline_flow.c               |  91 +++++++++
 app/test-pmd/config.c                     |  33 ++++
 app/test-pmd/meson.build                  |   3 +
 app/test-pmd/testpmd.h                    |   1 +
 doc/guides/nics/mlx5.rst                  |  28 +++
 drivers/net/mlx5/Makefile                 |  12 +-
 drivers/net/mlx5/meson.build              |   5 +
 drivers/net/mlx5/mlx5.c                   |   2 +
 drivers/net/mlx5/mlx5.h                   |   7 +
 drivers/net/mlx5/mlx5_devx_cmds.c         |  35 ++++
 drivers/net/mlx5/mlx5_glue.c              |  13 ++
 drivers/net/mlx5/mlx5_glue.h              |   1 +
 drivers/net/mlx5/mlx5_socket.c            | 226 ++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.c           |  22 +++
 drivers/net/mlx5/rte_pmd_mlx5.h           |  39 ++++
 drivers/net/mlx5/rte_pmd_mlx5_version.map |   7 +
 17 files changed, 528 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/mlx5/mlx5_socket.c
 create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.c
 create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.h

-- 
2.24.1



* [dpdk-dev] [RFC 1/4] net/mlx5: support flow dump API
  2020-01-14  3:45 [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW Xiaoyu Min
@ 2020-01-14  3:45 ` Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI Xiaoyu Min
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Xiaoyu Min @ 2020-01-14  3:45 UTC (permalink / raw)
  To: viacheslavo, matan, rasland, Shahaf Shuler; +Cc: dev, Xueming Li

From: Xueming Li <xuemingl@mellanox.com>

Dump fdb/nic_rx/nic_tx raw flow data into the specified file.

Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
---
 drivers/net/mlx5/Makefile                 | 11 ++++++-
 drivers/net/mlx5/meson.build              |  4 +++
 drivers/net/mlx5/mlx5.h                   |  2 ++
 drivers/net/mlx5/mlx5_devx_cmds.c         | 35 ++++++++++++++++++++
 drivers/net/mlx5/mlx5_glue.c              | 13 ++++++++
 drivers/net/mlx5/mlx5_glue.h              |  1 +
 drivers/net/mlx5/rte_pmd_mlx5.c           | 22 +++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5.h           | 39 +++++++++++++++++++++++
 drivers/net/mlx5/rte_pmd_mlx5_version.map |  7 ++++
 9 files changed, 133 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.c
 create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.h

diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index c5cf4397ac..0ff907445e 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -8,7 +8,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_pmd_mlx5.a
 LIB_GLUE = $(LIB_GLUE_BASE).$(LIB_GLUE_VERSION)
 LIB_GLUE_BASE = librte_pmd_mlx5_glue.so
-LIB_GLUE_VERSION = 19.08.0
+LIB_GLUE_VERSION = 19.11.0
 
 # Sources.
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5.c
@@ -39,6 +39,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_mp.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_nl.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_devx_cmds.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += rte_pmd_mlx5.c
+
+# Export include files of private API.
+SYMLINK-$(CONFIG_RTE_LIBRTE_MLX5_PMD)-include += rte_pmd_mlx5.h
 
 ifeq ($(CONFIG_RTE_IBVERBS_LINK_DLOPEN),y)
 INSTALL-$(CONFIG_RTE_LIBRTE_MLX5_PMD)-lib += $(LIB_GLUE)
@@ -203,6 +207,11 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
 		infiniband/mlx5dv.h \
 		func mlx5dv_dr_action_create_flow_meter \
 		$(AUTOCONF_OUTPUT)
+	$Q sh -- '$<' '$@' \
+		HAVE_MLX5_DR_FLOW_DUMP \
+		infiniband/mlx5dv.h \
+		func mlx5dv_dump_dr_domain \
+		$(AUTOCONF_OUTPUT)
 	$Q sh -- '$<' '$@' \
 		HAVE_MLX5DV_MMAP_GET_NC_PAGES_CMD \
 		infiniband/mlx5dv.h \
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index d6b32db794..1bdebcb6fc 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -61,6 +61,7 @@ if build
 		'mlx5_vlan.c',
 		'mlx5_devx_cmds.c',
 		'mlx5_utils.c',
+		'rte_pmd_mlx5.c',
 	)
 	if (dpdk_conf.has('RTE_ARCH_X86_64')
 		or dpdk_conf.has('RTE_ARCH_ARM64')
@@ -186,6 +187,8 @@ if build
 		'RDMA_NLDEV_ATTR_PORT_INDEX' ],
 		[ 'HAVE_RDMA_NLDEV_ATTR_NDEV_INDEX', 'rdma/rdma_netlink.h',
 		'RDMA_NLDEV_ATTR_NDEV_INDEX' ],
+		[ 'HAVE_MLX5_DR_FLOW_DUMP', 'infiniband/mlx5dv.h',
+		'mlx5dv_dump_dr_domain'],
 	]
 	config = configuration_data()
 	foreach arg:has_sym_args
@@ -225,3 +228,4 @@ if pmd_dlopen and build
 		install_dir: dlopen_install_dir,
 	)
 endif
+install_headers('rte_pmd_mlx5.h')
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c3df8256ce..68b08a7f0b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1049,6 +1049,8 @@ struct mlx5_devx_obj *mlx5_devx_cmd_create_tis
 	(struct ibv_context *ctx, struct mlx5_devx_tis_attr *tis_attr);
 struct mlx5_devx_obj *mlx5_devx_cmd_create_td(struct ibv_context *ctx);
 
+int mlx5_devx_cmd_flow_dump(struct mlx5_ibv_shared *sh, FILE *file);
+
 /* mlx5_flow_meter.c */
 
 int mlx5_flow_meter_ops_get(struct rte_eth_dev *dev, void *arg);
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 9893287ba8..d6bf15689d 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -927,3 +927,38 @@ mlx5_devx_cmd_create_td(struct ibv_context *ctx)
 			   transport_domain);
 	return td;
 }
+
+/**
+ * Dump all flows to file.
+ *
+ * @param[in] sh
+ *   Pointer to context.
+ * @param[out] file
+ *   Pointer to file stream.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_devx_cmd_flow_dump(struct mlx5_ibv_shared *sh __rte_unused,
+			FILE *file __rte_unused)
+{
+	int ret = 0;
+
+#ifdef HAVE_MLX5_DR_FLOW_DUMP
+	if (sh->fdb_domain) {
+		ret = mlx5_glue->dr_dump_domain(file, sh->fdb_domain);
+		if (ret)
+			return ret;
+	}
+	assert(sh->rx_domain);
+	ret = mlx5_glue->dr_dump_domain(file, sh->rx_domain);
+	if (ret)
+		return ret;
+	assert(sh->tx_domain);
+	ret = mlx5_glue->dr_dump_domain(file, sh->tx_domain);
+#else
+	ret = ENOTSUP;
+#endif
+	return -ret;
+}
diff --git a/drivers/net/mlx5/mlx5_glue.c b/drivers/net/mlx5/mlx5_glue.c
index 0917bf28d6..4906eebc01 100644
--- a/drivers/net/mlx5/mlx5_glue.c
+++ b/drivers/net/mlx5/mlx5_glue.c
@@ -1037,6 +1037,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx,
 #endif
 }
 
+static int
+mlx5_glue_dr_dump_domain(FILE *file, void *domain)
+{
+#ifdef HAVE_MLX5_DR_FLOW_DUMP
+	return mlx5dv_dump_dr_domain(file, domain);
+#else
+	RTE_SET_USED(file);
+	RTE_SET_USED(domain);
+	return -ENOTSUP;
+#endif
+}
+
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.version = MLX5_GLUE_VERSION,
@@ -1134,4 +1146,5 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.devx_umem_dereg = mlx5_glue_devx_umem_dereg,
 	.devx_qp_query = mlx5_glue_devx_qp_query,
 	.devx_port_query = mlx5_glue_devx_port_query,
+	.dr_dump_domain = mlx5_glue_dr_dump_domain,
 };
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index 6442f1eba8..6771a18c64 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -256,6 +256,7 @@ struct mlx5_glue {
 	int (*devx_port_query)(struct ibv_context *ctx,
 			       uint32_t port_num,
 			       struct mlx5dv_devx_port *mlx5_devx_port);
+	int (*dr_dump_domain)(FILE *file, void *domain);
 };
 
 const struct mlx5_glue *mlx5_glue;
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.c b/drivers/net/mlx5/rte_pmd_mlx5.c
new file mode 100644
index 0000000000..18fa12161a
--- /dev/null
+++ b/drivers/net/mlx5/rte_pmd_mlx5.c
@@ -0,0 +1,22 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/*
+ * Copyright 2019 Mellanox Technologies, Ltd
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "rte_pmd_mlx5.h"
+#include "mlx5.h"
+
+int
+rte_pmd_mlx5_flow_dump(uint16_t port_id, FILE *file)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	return mlx5_devx_cmd_flow_dump(priv->sh, file);
+}
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
new file mode 100644
index 0000000000..f92a054541
--- /dev/null
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/*
+ * Copyright 2019 Mellanox Technologies, Ltd
+ */
+
+#ifndef _RTE_PMD_MLX5_H_
+#define _RTE_PMD_MLX5_H_
+
+/**
+ * @file
+ *
+ * Mellanox private RTE level APIs.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dump flow raw data to file
+ *
+ * @param port_id
+ *    The port identifier of the Ethernet device.
+ * @param file
+ *   Pointer to the file where the flow raw data will be dumped.
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+__rte_experimental
+int
+rte_pmd_mlx5_flow_dump(uint16_t port_id, FILE *file);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index f9f17e4f6e..c7b70201d8 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -1,3 +1,10 @@
 DPDK_20.0 {
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pmd_mlx5_flow_dump;
+
+};
-- 
2.24.1



* [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI
  2020-01-14  3:45 [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 1/4] net/mlx5: support flow dump API Xiaoyu Min
@ 2020-01-14  3:45 ` Xiaoyu Min
  2020-01-14  4:31   ` Jerin Jacob
  2020-01-14  3:45 ` [dpdk-dev] [RFC 3/4] net/mlx5: add socket server for external tools Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 4/4] doc: update mlx5 document for flow dump feature Xiaoyu Min
  3 siblings, 1 reply; 9+ messages in thread
From: Xiaoyu Min @ 2020-01-14  3:45 UTC (permalink / raw)
  To: viacheslavo, matan, rasland, Wenzhuo Lu, Jingjing Wu,
	Bernard Iremonger, Adrien Mazarguil, Ori Kam
  Cc: dev, Xueming Li

From: Xueming Li <xuemingl@mellanox.com>

New flow dump CLI to dump MLX5 PMD-specific flows to the screen or a file.
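
For example (illustrative session; the exact syntax is documented by the
mlx5 guide update later in this series):

	testpmd> flow dump 0 /tmp/mlx5_flow_dump.bin
	Flow dump finished

If no file path is given, port_flow_dump() falls back to writing the raw
dump to stdout.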

Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
---
 app/test-pmd/Makefile       |  4 ++
 app/test-pmd/cmdline_flow.c | 91 +++++++++++++++++++++++++++++++++++++
 app/test-pmd/config.c       | 33 ++++++++++++++
 app/test-pmd/meson.build    |  3 ++
 app/test-pmd/testpmd.h      |  1 +
 5 files changed, 132 insertions(+)

diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
index d5258eae4a..e60c8ecf63 100644
--- a/app/test-pmd/Makefile
+++ b/app/test-pmd/Makefile
@@ -70,6 +70,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_SOFTNIC),y)
 LDLIBS += -lrte_pmd_softnic
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_MLX5_PMD),y)
+LDLIBS += -lrte_pmd_mlx5
+endif
+
 endif
 
 include $(RTE_SDK)/mk/rte.app.mk
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 99dade7d8c..19336e5d42 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -41,6 +41,7 @@ enum index {
 	BOOLEAN,
 	STRING,
 	HEX,
+	FILE_PATH,
 	MAC_ADDR,
 	IPV4_ADDR,
 	IPV6_ADDR,
@@ -63,6 +64,7 @@ enum index {
 	CREATE,
 	DESTROY,
 	FLUSH,
+	DUMP,
 	QUERY,
 	LIST,
 	ISOLATE,
@@ -631,6 +633,9 @@ struct buffer {
 			uint32_t *rule;
 			uint32_t rule_n;
 		} destroy; /**< Destroy arguments. */
+		struct {
+			char file[128];
+		} dump; /**< Dump arguments. */
 		struct {
 			uint32_t rule;
 			struct rte_flow_action action;
@@ -685,6 +690,12 @@ static const enum index next_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_dump_attr[] = {
+	FILE_PATH,
+	END,
+	ZERO,
+};
+
 static const enum index next_list_attr[] = {
 	LIST_GROUP,
 	END,
@@ -1374,6 +1385,9 @@ static int parse_destroy(struct context *, const struct token *,
 static int parse_flush(struct context *, const struct token *,
 		       const char *, unsigned int,
 		       void *, unsigned int);
+static int parse_dump(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_query(struct context *, const struct token *,
 		       const char *, unsigned int,
 		       void *, unsigned int);
@@ -1401,6 +1415,9 @@ static int parse_string(struct context *, const struct token *,
 static int parse_hex(struct context *ctx, const struct token *token,
 			const char *str, unsigned int len,
 			void *buf, unsigned int size);
+static int parse_string0(struct context *, const struct token *,
+			const char *, unsigned int,
+			void *, unsigned int);
 static int parse_mac_addr(struct context *, const struct token *,
 			  const char *, unsigned int,
 			  void *, unsigned int);
@@ -1494,6 +1511,12 @@ static const struct token token_list[] = {
 		.type = "HEX",
 		.help = "fixed string",
 		.call = parse_hex,
+	},
+	[FILE_PATH] = {
+		.name = "{file path}",
+		.type = "STRING",
+		.help = "file path",
+		.call = parse_string0,
 		.comp = comp_none,
 	},
 	[MAC_ADDR] = {
@@ -1555,6 +1578,7 @@ static const struct token token_list[] = {
 			      CREATE,
 			      DESTROY,
 			      FLUSH,
+			      DUMP,
 			      LIST,
 			      QUERY,
 			      ISOLATE)),
@@ -1589,6 +1613,14 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
 		.call = parse_flush,
 	},
+	[DUMP] = {
+		.name = "dump",
+		.help = "dump all flow rules to file",
+		.next = NEXT(next_dump_attr, NEXT_ENTRY(PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.dump.file),
+			     ARGS_ENTRY(struct buffer, port)),
+		.call = parse_dump,
+	},
 	[QUERY] = {
 		.name = "query",
 		.help = "query an existing flow rule",
@@ -5012,6 +5044,33 @@ parse_flush(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for dump command. */
+static int
+parse_dump(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != DUMP)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 /** Parse tokens for query command. */
 static int
 parse_query(struct context *ctx, const struct token *token,
@@ -5409,6 +5468,35 @@ parse_hex(struct context *ctx, const struct token *token,
 
 }
 
+/**
+ * Parse a zero-ended string.
+ */
+static int
+parse_string0(struct context *ctx, const struct token *token __rte_unused,
+	     const char *str, unsigned int len,
+	     void *buf, unsigned int size)
+{
+	const struct arg *arg_data = pop_args(ctx);
+
+	/* Arguments are expected. */
+	if (!arg_data)
+		return -1;
+	size = arg_data->size;
+	/* Bit-mask fill is not supported. */
+	if (arg_data->mask || size < len + 1)
+		goto error;
+	if (!ctx->object)
+		return len;
+	buf = (uint8_t *)ctx->object + arg_data->offset;
+	strncpy(buf, str, len);
+	if (ctx->objmask)
+		memset((uint8_t *)ctx->objmask + arg_data->offset, 0xff, len);
+	return len;
+error:
+	push_args(ctx, arg_data);
+	return -1;
+}
+
 /**
  * Parse a MAC address.
  *
@@ -6068,6 +6156,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case FLUSH:
 		port_flow_flush(in->port);
 		break;
+	case DUMP:
+		port_flow_dump(in->port, in->args.dump.file);
+		break;
 	case QUERY:
 		port_flow_query(in->port, in->args.query.rule,
 				&in->args.query.action);
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9da1ffb034..b5a9915df9 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -48,6 +48,9 @@
 #ifdef RTE_LIBRTE_BNXT_PMD
 #include <rte_pmd_bnxt.h>
 #endif
+#ifdef RTE_LIBRTE_MLX5_PMD
+#include <rte_pmd_mlx5.h>
+#endif
 #include <rte_gro.h>
 #include <rte_config.h>
 
@@ -1441,6 +1444,36 @@ port_flow_flush(portid_t port_id)
 	return ret;
 }
 
+/** Dump all flow rules. */
+int
+port_flow_dump(portid_t port_id __rte_unused,
+	       const char *file_name __rte_unused)
+{
+	int ret = 0;
+#ifdef RTE_LIBRTE_MLX5_PMD
+	FILE * file = stdout;
+
+	if (file_name && strlen(file_name)) {
+		file = fopen(file_name, "w");
+		if (!file) {
+			printf("Failed to create file %s: %s\n", file_name,
+			       strerror(errno));
+			return -errno;
+		}
+	}
+	ret = rte_pmd_mlx5_flow_dump(port_id, file);
+	if (ret)
+		printf("Failed to dump flow: %s\n", strerror(-ret));
+	else
+		printf("Flow dump finished\n");
+	if (file_name && strlen(file_name))
+		fclose(file);
+#else
+	printf("MLX5 PMD driver disabled\n");
+#endif
+	return ret;
+}
+
 /** Query a flow rule. */
 int
 port_flow_query(portid_t port_id, uint32_t rule,
diff --git a/app/test-pmd/meson.build b/app/test-pmd/meson.build
index 6006c60f99..a71e0a0cd1 100644
--- a/app/test-pmd/meson.build
+++ b/app/test-pmd/meson.build
@@ -48,3 +48,6 @@ if dpdk_conf.has('RTE_LIBRTE_BPF')
 	sources += files('bpf_cmd.c')
 	deps += 'bpf'
 endif
+if dpdk_conf.has('RTE_LIBRTE_MLX5_PMD')
+	deps += 'pmd_mlx5'
+endif
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 857a11f8de..e1b9aba360 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -734,6 +734,7 @@ int port_flow_create(portid_t port_id,
 		     const struct rte_flow_action *actions);
 int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
 int port_flow_flush(portid_t port_id);
+int port_flow_dump(portid_t port_id, const char *file_name);
 int port_flow_query(portid_t port_id, uint32_t rule,
 		    const struct rte_flow_action *action);
 void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
-- 
2.24.1



* [dpdk-dev] [RFC 3/4] net/mlx5: add socket server for external tools
  2020-01-14  3:45 [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 1/4] net/mlx5: support flow dump API Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI Xiaoyu Min
@ 2020-01-14  3:45 ` Xiaoyu Min
  2020-01-14  3:45 ` [dpdk-dev] [RFC 4/4] doc: update mlx5 document for flow dump feature Xiaoyu Min
  3 siblings, 0 replies; 9+ messages in thread
From: Xiaoyu Min @ 2020-01-14  3:45 UTC (permalink / raw)
  To: viacheslavo, matan, rasland, Shahaf Shuler, Anatoly Burakov
  Cc: dev, Xueming Li

From: Xueming Li <xuemingl@mellanox.com>

Add a PMD Unix socket server to enable external tools to trigger a flow
dump.

Socket path:
	/var/tmp/dpdk_mlx5_<pid>
Socket message format:
	inline data: port_id (uint16_t)
	ancillary data (SCM_RIGHTS): output file descriptor (int)
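
A purely illustrative client sketch (not part of this patch; error
handling simplified, little-endian assumed for the inline port id) showing
how an external tool could use this socket:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <unistd.h>

static int
request_flow_dump(int dpdk_pid, uint16_t port_id, int out_fd)
{
	struct sockaddr_un sun = { .sun_family = AF_UNIX };
	char cbuf[CMSG_SPACE(sizeof(int))];
	int data = port_id; /* server reads an int; low 16 bits carry the port id */
	struct iovec io = { .iov_base = &data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &io, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg;
	int sock, status = -1;

	memset(cbuf, 0, sizeof(cbuf));
	snprintf(sun.sun_path, sizeof(sun.sun_path),
		 "/var/tmp/dpdk_mlx5_%d", dpdk_pid);
	sock = socket(AF_UNIX, SOCK_STREAM, 0);
	if (sock < 0)
		return -errno;
	if (connect(sock, (const struct sockaddr *)&sun, sizeof(sun)) < 0) {
		close(sock);
		return -errno;
	}
	/* Attach the output file descriptor as SCM_RIGHTS ancillary data. */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &out_fd, sizeof(out_fd));
	if (sendmsg(sock, &msg, 0) >= 0) {
		/* The reply carries 0 on success or a positive errno value. */
		io.iov_base = &status;
		io.iov_len = sizeof(status);
		msg.msg_control = NULL;
		msg.msg_controllen = 0;
		if (recvmsg(sock, &msg, 0) < 0)
			status = -errno;
	}
	close(sock);
	return status;
}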

Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
---
 drivers/net/mlx5/Makefile      |   1 +
 drivers/net/mlx5/meson.build   |   1 +
 drivers/net/mlx5/mlx5.c        |   2 +
 drivers/net/mlx5/mlx5.h        |   5 +
 drivers/net/mlx5/mlx5_socket.c | 226 +++++++++++++++++++++++++++++++++
 5 files changed, 235 insertions(+)
 create mode 100644 drivers/net/mlx5/mlx5_socket.c

diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index 0ff907445e..5f64d6ef02 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -40,6 +40,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_nl.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_devx_cmds.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += rte_pmd_mlx5.c
+SRCS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5_socket.c
 
 # Export include files of private API.
 SYMLINK-$(CONFIG_RTE_LIBRTE_MLX5_PMD)-include += rte_pmd_mlx5.h
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 1bdebcb6fc..8e5aabfdd3 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -62,6 +62,7 @@ if build
 		'mlx5_devx_cmds.c',
 		'mlx5_utils.c',
 		'rte_pmd_mlx5.c',
+		'mlx5_socket.c',
 	)
 	if (dpdk_conf.has('RTE_ARCH_X86_64')
 		or dpdk_conf.has('RTE_ARCH_ARM64')
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 50960c91ce..ffee39c1a0 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2954,6 +2954,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	struct mlx5_dev_config dev_config;
 	int ret;
 
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		mlx5_pmd_socket_init();
 	ret = mlx5_init_once();
 	if (ret) {
 		DRV_LOG(ERR, "unable to init PMD global data: %s",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 68b08a7f0b..92713551aa 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -988,6 +988,11 @@ void mlx5_mp_uninit_primary(void);
 int mlx5_mp_init_secondary(void);
 void mlx5_mp_uninit_secondary(void);
 
+/* mlx5_socket.c */
+
+int mlx5_pmd_socket_init(void);
+void mlx5_pmd_socket_uninit(void);
+
 /* mlx5_nl.c */
 
 int mlx5_nl_init(int protocol);
diff --git a/drivers/net/mlx5/mlx5_socket.c b/drivers/net/mlx5/mlx5_socket.c
new file mode 100644
index 0000000000..afd0ec9ac2
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_socket.c
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 Mellanox Technologies, Ltd
+ */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/stat.h>
+
+#include "rte_eal.h"
+#include "mlx5_utils.h"
+#include "rte_pmd_mlx5.h"
+#include "mlx5.h"
+
+/* PMD socket service for tools. */
+
+int server_socket; /* Unix socket for primary process. */
+struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+
+static void
+mlx5_pmd_make_path(struct sockaddr_un *addr, int pid)
+{
+	snprintf(addr->sun_path, sizeof(addr->sun_path), "/var/tmp/dpdk_%s_%d",
+		 MLX5_DRIVER_NAME, pid);
+}
+
+/**
+ * Handle server pmd socket interrupts.
+ */
+static void
+mlx5_pmd_socket_handle(void *cb __rte_unused)
+{
+	int conn_sock;
+	int ret = -1;
+	struct cmsghdr *cmsg = NULL;
+	int data;
+	char buf[CMSG_SPACE(sizeof(int))] = { 0 };
+	struct iovec io = {
+		.iov_base = &data,
+		.iov_len = sizeof(data),
+	};
+	struct msghdr msg = {
+		.msg_iov = &io,
+		.msg_iovlen = 1,
+		.msg_control = buf,
+		.msg_controllen = sizeof(buf),
+	};
+	uint16_t port_id;
+	int fd;
+	FILE *file = NULL;
+
+	/* Accept the connection from the client. */
+	conn_sock = accept(server_socket, NULL, NULL);
+	if (conn_sock < 0) {
+		DRV_LOG(WARNING, "connection failed: %s", strerror(errno));
+		return;
+	}
+	ret = recvmsg(conn_sock, &msg, MSG_WAITALL);
+	if (ret < 0) {
+		DRV_LOG(WARNING, "wrong message received: %s",
+			strerror(errno));
+		goto error;
+	}
+	/* Receive file descriptor. */
+	cmsg = CMSG_FIRSTHDR(&msg);
+	if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS ||
+	    cmsg->cmsg_len < sizeof(int)) {
+		DRV_LOG(WARNING, "invalid file descriptor message");
+		goto error;
+	}
+	memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
+	file = fdopen(fd, "w");
+	if (!file) {
+		DRV_LOG(WARNING, "Failed to open file");
+		goto error;
+	}
+	/* Receive port number. */
+	if (msg.msg_iovlen != 1 || msg.msg_iov->iov_len < sizeof(uint16_t)) {
+		DRV_LOG(WARNING, "wrong port number message");
+		goto error;
+	}
+	memcpy(&port_id, msg.msg_iov->iov_base, sizeof(port_id));
+	/* Dump flow. */
+	ret = rte_pmd_mlx5_flow_dump(port_id, file);
+	/* Set-up the ancillary data and reply. */
+	msg.msg_controllen = 0;
+	msg.msg_control = NULL;
+	msg.msg_iovlen = 1;
+	msg.msg_iov = &io;
+	data = -ret;
+	io.iov_len = sizeof(data);
+	io.iov_base = &data;
+	do {
+		ret = sendmsg(conn_sock, &msg, 0);
+	} while (ret < 0 && errno == EINTR);
+	if (ret < 0)
+		DRV_LOG(WARNING, "failed to send response %s",
+			strerror(errno));
+error:
+	if (conn_sock > 0)
+		close(conn_sock);
+	if (file)
+		fclose(file);
+}
+
+/**
+ * Install interrupt handler.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+mlx5_pmd_interrupt_handler_install(void)
+{
+	assert(server_socket);
+	server_intr_handle.fd = server_socket;
+	server_intr_handle.type = RTE_INTR_HANDLE_EXT;
+	return rte_intr_callback_register(&server_intr_handle,
+					  mlx5_pmd_socket_handle, NULL);
+}
+
+/**
+ * Uninstall interrupt handler.
+ */
+static void
+mlx5_pmd_interrupt_handler_uninstall(void)
+{
+	if (server_socket) {
+		mlx5_intr_callback_unregister(&server_intr_handle,
+					      mlx5_pmd_socket_handle,
+					      NULL);
+	}
+	server_intr_handle.fd = 0;
+	server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+}
+
+/**
+ * Initialise the socket to communicate with external tools.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_pmd_socket_init(void)
+{
+	struct sockaddr_un sun = {
+		.sun_family = AF_UNIX,
+	};
+	int ret = -1;
+	int flags;
+
+	assert(rte_eal_process_type() == RTE_PROC_PRIMARY);
+	if (server_socket)
+		return 0;
+	/*
+	 * Initialize the socket to communicate with the secondary
+	 * process.
+	 */
+	ret = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (ret < 0) {
+		DRV_LOG(WARNING, "Failed to open mlx5 socket: %s",
+			strerror(errno));
+		goto error;
+	}
+	server_socket = ret;
+	flags = fcntl(server_socket, F_GETFL, 0);
+	if (flags == -1)
+		goto error;
+	ret = fcntl(server_socket, F_SETFL, flags | O_NONBLOCK);
+	if (ret < 0)
+		goto error;
+	mlx5_pmd_make_path(&sun, getpid());
+	remove(sun.sun_path);
+	ret = bind(server_socket, (const struct sockaddr *)&sun, sizeof(sun));
+	if (ret < 0) {
+		DRV_LOG(WARNING,
+			"cannot bind mlx5 socket: %s", strerror(errno));
+		goto close;
+	}
+	ret = listen(server_socket, 0);
+	if (ret < 0) {
+		DRV_LOG(WARNING, "cannot listen on mlx5 socket: %s",
+			strerror(errno));
+		goto close;
+	}
+	if (mlx5_pmd_interrupt_handler_install()) {
+		DRV_LOG(WARNING, "cannot register interrupt handler for mlx5 socket: %s",
+			strerror(errno));
+		goto close;
+	}
+	return 0;
+close:
+	remove(sun.sun_path);
+error:
+	claim_zero(close(server_socket));
+	server_socket = 0;
+	DRV_LOG(ERR, "Cannot initialize socket: %s", strerror(errno));
+	return -errno;
+}
+
+/**
+ * Un-Initialize the pmd socket
+ */
+void __attribute__((destructor))
+mlx5_pmd_socket_uninit(void)
+{
+	if (!server_socket)
+		return;
+	mlx5_pmd_interrupt_handler_uninstall();
+	MKSTR(path, "/var/tmp/dpdk_%s_%d", MLX5_DRIVER_NAME, getpid());
+	claim_zero(close(server_socket));
+	server_socket = 0;
+	claim_zero(remove(path));
+}
-- 
2.24.1



* [dpdk-dev] [RFC 4/4] doc: update mlx5 document for flow dump feature
  2020-01-14  3:45 [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW Xiaoyu Min
                   ` (2 preceding siblings ...)
  2020-01-14  3:45 ` [dpdk-dev] [RFC 3/4] net/mlx5: add socket server for external tools Xiaoyu Min
@ 2020-01-14  3:45 ` Xiaoyu Min
  3 siblings, 0 replies; 9+ messages in thread
From: Xiaoyu Min @ 2020-01-14  3:45 UTC (permalink / raw)
  To: viacheslavo, matan, rasland, Shahaf Shuler, John McNamara,
	Marko Kovacevic
  Cc: dev, Xueming Li

From: Xueming Li <xuemingl@mellanox.com>

The mlx5 guide is updated with instructions on how to dump HW flows.

Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
---
 doc/guides/nics/mlx5.rst | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 92228d3cca..4781aae0aa 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1316,3 +1316,31 @@ ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_pmd_mlx5.
       Port 3 Link Up - speed 10000 Mbps - full-duplex
       Done
       testpmd>
+
+How to dump flows
+-----------------
+
+This section demonstrates how to dump flows. Currently, it's possible to dump
+all flows with the assistance of external tools.
+
+#. There are two ways to get the flow raw file:
+
+   - Using testpmd CLI:
+
+   .. code-block:: console
+
+       testpmd> flow dump <port> <output_file>
+
+   - Call the mlx5 PMD API in rte_pmd_mlx5.h:
+
+   .. code-block:: console
+
+       rte_pmd_mlx5_flow_dump(port, file);
+
+#. Dump human-readable flows from the raw file:
+
+   Get the flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump
+
+   .. code-block:: console
+
+       mlx_steering_dump.py -f <output_file>
-- 
2.24.1



* Re: [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI
  2020-01-14  3:45 ` [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI Xiaoyu Min
@ 2020-01-14  4:31   ` Jerin Jacob
  2020-01-14 10:15     ` Jack Min
  0 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob @ 2020-01-14  4:31 UTC (permalink / raw)
  To: Xiaoyu Min
  Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Adrien Mazarguil, Ori Kam,
	dpdk-dev, Xueming Li

On Tue, Jan 14, 2020 at 9:15 AM Xiaoyu Min <jackmin@mellanox.com> wrote:
>
> From: Xueming Li <xuemingl@mellanox.com>
>
> New flow dump CLI to dump MLX5 PMD specific flows into screen.
>
> Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
> ---
>  app/test-pmd/Makefile       |  4 ++
>  app/test-pmd/cmdline_flow.c | 91 +++++++++++++++++++++++++++++++++++++
>  app/test-pmd/config.c       | 33 ++++++++++++++
>  app/test-pmd/meson.build    |  3 ++
>  app/test-pmd/testpmd.h      |  1 +
>  5 files changed, 132 insertions(+)
>

>
> +/** Dump all flow rules. */
> +int
> +port_flow_dump(portid_t port_id __rte_unused,
> +              const char *file_name __rte_unused)
> +{
> +       int ret = 0;
> +#ifdef RTE_LIBRTE_MLX5_PMD

IMO, it should be the last resort to add driver-specific symbols in testpmd.
Why not introduce rte_flow_dump() and hook the MLX driver underneath?

> +       FILE * file = stdout;
> +
> +       if (file_name && strlen(file_name)) {
> +               file = fopen(file_name, "w");
> +               if (!file) {
> +                       printf("Failed to create file %s: %s\n", file_name,
> +                              strerror(errno));
> +                       return -errno;
> +               }
> +       }
> +       ret = rte_pmd_mlx5_flow_dump(port_id, file);
> +       if (ret)
> +               printf("Failed to dump flow: %s\n", strerror(-ret));
> +       else
> +               printf("Flow dump finished\n");
> +       if (file_name && strlen(file_name))
> +               fclose(file);
> +#else
> +       printf("MLX5 PMD driver disabled\n");
> +#endif
> +       return ret;
> +}


* Re: [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI
  2020-01-14  4:31   ` Jerin Jacob
@ 2020-01-14 10:15     ` Jack Min
  2020-01-14 14:00       ` Jerin Jacob
  0 siblings, 1 reply; 9+ messages in thread
From: Jack Min @ 2020-01-14 10:15 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Adrien Mazarguil, Ori Kam,
	dpdk-dev, Xueming(Steven) Li

On Tue, 20-01-14, 10:01, Jerin Jacob wrote:
> On Tue, Jan 14, 2020 at 9:15 AM Xiaoyu Min <jackmin@mellanox.com> wrote:
> >
> > From: Xueming Li <xuemingl@mellanox.com>
> >
> > New flow dump CLI to dump MLX5 PMD specific flows into screen.
> >
> > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> > Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
> > ---
> >  app/test-pmd/Makefile       |  4 ++
> >  app/test-pmd/cmdline_flow.c | 91 +++++++++++++++++++++++++++++++++++++
> >  app/test-pmd/config.c       | 33 ++++++++++++++
> >  app/test-pmd/meson.build    |  3 ++
> >  app/test-pmd/testpmd.h      |  1 +
> >  5 files changed, 132 insertions(+)
> >
> 
> >
> > +/** Dump all flow rules. */
> > +int
> > +port_flow_dump(portid_t port_id __rte_unused,
> > +              const char *file_name __rte_unused)
> > +{
> > +       int ret = 0;
> > +#ifdef RTE_LIBRTE_MLX5_PMD
> 
> IMO, It should be the last resort to add driver-specific symbols in testpmd.
> Why not introduce rte_flow_dump() and hook the MLX driver underneath?
> 
Hey Jerin,

Thanks for your comments.

My understanding is that this flow dump is very Mellanox-specific: it dumps
all flows in Mellanox HW using a Mellanox format. In short, these are
hardware flows, which are different from rte flows.

I don't know whether other vendors have similar functionality and needs,
such that an rte flow level API would be helpful.

And basically the rte flow API is based on rte_flow, so a dump function
probably means dumping rte_flow itself (i.e. flow->attr, pattern, actions).

This is the reason a private API was chosen, with driver-specific symbols
added to testpmd as a result.

-Jack


* Re: [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI
  2020-01-14 10:15     ` Jack Min
@ 2020-01-14 14:00       ` Jerin Jacob
  2020-01-15 12:49         ` Jack Min
  0 siblings, 1 reply; 9+ messages in thread
From: Jerin Jacob @ 2020-01-14 14:00 UTC (permalink / raw)
  To: Jack Min
  Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Adrien Mazarguil, Ori Kam,
	dpdk-dev, Xueming(Steven) Li

On Tue, Jan 14, 2020 at 3:45 PM Jack Min <jackmin@mellanox.com> wrote:
>
> On Tue, 20-01-14, 10:01, Jerin Jacob wrote:
> > On Tue, Jan 14, 2020 at 9:15 AM Xiaoyu Min <jackmin@mellanox.com> wrote:
> > >
> > > From: Xueming Li <xuemingl@mellanox.com>
> > >
> > > New flow dump CLI to dump MLX5 PMD specific flows into screen.
> > >
> > > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> > > Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
> > > ---
> > >  app/test-pmd/Makefile       |  4 ++
> > >  app/test-pmd/cmdline_flow.c | 91 +++++++++++++++++++++++++++++++++++++
> > >  app/test-pmd/config.c       | 33 ++++++++++++++
> > >  app/test-pmd/meson.build    |  3 ++
> > >  app/test-pmd/testpmd.h      |  1 +
> > >  5 files changed, 132 insertions(+)
> > >
> >
> > >
> > > +/** Dump all flow rules. */
> > > +int
> > > +port_flow_dump(portid_t port_id __rte_unused,
> > > +              const char *file_name __rte_unused)
> > > +{
> > > +       int ret = 0;
> > > +#ifdef RTE_LIBRTE_MLX5_PMD
> >
> > IMO, It should be the last resort to add driver-specific symbols in testpmd.
> > Why not introduce rte_flow_dump() and hook the MLX driver underneath?
> >
> Hey Jerin,

Hi Jack.

>
> Thanks for you comments.
>
> What my understanding is this flow dump is very Mellanox specific, it will dump
> all flows in Mellanox HW using Mellanox format. They are hardware flows in
> short, which are different from rte flow.

We do have similar APIs for other drivers to expose internal info.
I think a similar API [1] would suffice for your case. We should not
standardize the output of the dump; instead, provide a generic API to
dump the internal HW representation of the flows to a file/stdout.

OcteonTx2 MCAM HW (rte_flow) has similar internal information;
we could implement the driver API if it is a generic API.

[1]
See the below example for eventdev driver.

/**
 * Dump internal information about *dev_id* to the FILE* provided in *f*.
 *
 * @param dev_id
 *   The identifier of the device.
 *
 * @param f
 *   A pointer to a file for output
 *
 * @return
 *   - 0: on success
 *   - <0: on failure.
 */
int
rte_event_dev_dump(uint8_t dev_id, FILE *f);


>
> I don't know whether other vendor has the similar functionality and needs
> so an rte flow level API could be helpful.
>
> And basically rte flow API is based on rte_flow, a dump function probabily means
> dump rte_flow itself (i.e flow->attr, pattern, actions).
>
> This is the reason a private API is choosen and driver-specific symbols added
> in testpmd as result.
>
> -Jack


* Re: [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI
  2020-01-14 14:00       ` Jerin Jacob
@ 2020-01-15 12:49         ` Jack Min
  0 siblings, 0 replies; 9+ messages in thread
From: Jack Min @ 2020-01-15 12:49 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Slava Ovsiienko, Matan Azrad, Raslan Darawsheh, Wenzhuo Lu,
	Jingjing Wu, Bernard Iremonger, Adrien Mazarguil, Ori Kam,
	dpdk-dev, Xueming(Steven) Li

On Tue, 20-01-14, 19:30, Jerin Jacob wrote:
> On Tue, Jan 14, 2020 at 3:45 PM Jack Min <jackmin@mellanox.com> wrote:
> >
> > On Tue, 20-01-14, 10:01, Jerin Jacob wrote:
> > > On Tue, Jan 14, 2020 at 9:15 AM Xiaoyu Min <jackmin@mellanox.com> wrote:
> > > >
> > > > From: Xueming Li <xuemingl@mellanox.com>
> > > >
> > > > New flow dump CLI to dump MLX5 PMD specific flows into screen.
> > > >
> > > > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> > > > Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
> > > > ---
> > > >  app/test-pmd/Makefile       |  4 ++
> > > >  app/test-pmd/cmdline_flow.c | 91 +++++++++++++++++++++++++++++++++++++
> > > >  app/test-pmd/config.c       | 33 ++++++++++++++
> > > >  app/test-pmd/meson.build    |  3 ++
> > > >  app/test-pmd/testpmd.h      |  1 +
> > > >  5 files changed, 132 insertions(+)
> > > >
> > >
> > > >
> > > > +/** Dump all flow rules. */
> > > > +int
> > > > +port_flow_dump(portid_t port_id __rte_unused,
> > > > +              const char *file_name __rte_unused)
> > > > +{
> > > > +       int ret = 0;
> > > > +#ifdef RTE_LIBRTE_MLX5_PMD
> > >
> > > IMO, It should be the last resort to add driver-specific symbols in testpmd.
> > > Why not introduce rte_flow_dump() and hook the MLX driver underneath?
> > >
> > Hey Jerin,
> 
> Hi Jack.
> 
> >
> > Thanks for you comments.
> >
> > What my understanding is this flow dump is very Mellanox specific, it will dump
> > all flows in Mellanox HW using Mellanox format. They are hardware flows in
> > short, which are different from rte flow.
> 
> We do have a similar API for other drivers to represent the internal info.
> I think, similar API[1] would suffice for your case. We should not
> standardize the output of dump, instead, it will provide a generic API
> to dump the internal representation of flow HW to file/stdout.
> 
> Octeontx2 MACM HW (rte_flow) has similar internal information,
> We could implement the driver API if it is a generic API.
Yes, a generic rte_flow API (similar to [1]) can benefit both of us. :-)
I'll propose a new rte_flow API for this in v2.

Thanks,
-Jack

> 
> [1]
> See the below example for eventdev driver.
> 
> /**
>  * Dump internal information about *dev_id* to the FILE* provided in *f*.
>  *
>  * @param dev_id
>  *   The identifier of the device.
>  *
>  * @param f
>  *   A pointer to a file for output
>  *
>  * @return
>  *   - 0: on success
>  *   - <0: on failure.
>  */
> int
> rte_event_dev_dump(uint8_t dev_id, FILE *f);
> 
> 
> >
> > I don't know whether other vendor has the similar functionality and needs
> > so an rte flow level API could be helpful.
> >
> > And basically rte flow API is based on rte_flow, a dump function probabily means
> > dump rte_flow itself (i.e flow->attr, pattern, actions).
> >
> > This is the reason a private API is choosen and driver-specific symbols added
> > in testpmd as result.
> >
> > -Jack


Thread overview: 9+ messages (newest: 2020-01-15 12:49 UTC)
2020-01-14  3:45 [dpdk-dev] [RFC 0/4] net/mlx5: dump software steering flows in HW Xiaoyu Min
2020-01-14  3:45 ` [dpdk-dev] [RFC 1/4] net/mlx5: support flow dump API Xiaoyu Min
2020-01-14  3:45 ` [dpdk-dev] [RFC 2/4] app/testpmd: new flow dump CLI Xiaoyu Min
2020-01-14  4:31   ` Jerin Jacob
2020-01-14 10:15     ` Jack Min
2020-01-14 14:00       ` Jerin Jacob
2020-01-15 12:49         ` Jack Min
2020-01-14  3:45 ` [dpdk-dev] [RFC 3/4] net/mlx5: add socket server for external tools Xiaoyu Min
2020-01-14  3:45 ` [dpdk-dev] [RFC 4/4] doc: update mlx5 document for flow dump feature Xiaoyu Min
