DPDK patches and discussions
* [PATCH v2 0/2] add new PHY affinity in the flow item and Tx queue API
       [not found] <http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaweiw@nvidia.com/>
@ 2023-01-30 17:00 ` Jiawei Wang
  2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
  2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
  0 siblings, 2 replies; 12+ messages in thread
From: Jiawei Wang @ 2023-01-30 17:00 UTC (permalink / raw)
  To: viacheslavo, orika, thomas; +Cc: dev, rasland

For multiple hardware ports connected to a single DPDK port (mhpsdp),
there is currently no information to indicate which hardware port
a received packet belongs to.

This patch introduces a new phy affinity item in the rte flow API;
the phy affinity value reflects the physical port on which the
packets were received.

This patch adds the tx_phy_affinity setting in the Tx queue API; the affinity value
selects the hardware port from which packets are sent.

When the phy affinity is used as a matching item in a flow, and the same
affinity is set on the Tx queue, the packet can be sent from the same
hardware port on which it was received.
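
To illustrate the intended round trip, here is a minimal C sketch (hedged:
it assumes a flow rule steering phy-affinity-1 traffic to Rx queue 0 as in
patch 1, and Tx queue 0 configured with tx_phy_affinity 1 as in patch 2;
error handling is simplified):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Forward packets back out of the hardware port they arrived on:
 * Rx queue 0 only receives affinity-1 traffic (flow rule), and Tx queue 0
 * is pinned to the same hardware port (tx_phy_affinity = 1). */
static void
echo_on_same_phy_port(uint16_t port_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, nb_tx;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
	/* Drop whatever the Tx queue could not accept. */
	while (nb_tx < nb_rx)
		rte_pktmbuf_free(pkts[nb_tx++]);
}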

RFC: http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaweiw@nvidia.com/

The PMD patch will be sent soon.

Jiawei Wang (2):
  ethdev: add PHY affinity match item
  ethdev: introduce the PHY affinity field in Tx queue API

 app/test-pmd/cmdline.c                      | 84 +++++++++++++++++++++
 app/test-pmd/cmdline_flow.c                 | 29 +++++++
 devtools/libabigail.abignore                |  5 ++
 doc/guides/prog_guide/rte_flow.rst          |  8 ++
 doc/guides/rel_notes/release_23_03.rst      |  5 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 17 +++++
 lib/ethdev/rte_ethdev.h                     |  7 ++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 28 +++++++
 9 files changed, 184 insertions(+)

-- 
2.18.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 1/2] ethdev: add PHY affinity match item
  2023-01-30 17:00 ` [PATCH v2 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
@ 2023-01-30 17:00   ` Jiawei Wang
  2023-01-31 14:36     ` Ori Kam
  2023-02-01  8:50     ` Andrew Rybchenko
  2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
  1 sibling, 2 replies; 12+ messages in thread
From: Jiawei Wang @ 2023-01-30 17:00 UTC (permalink / raw)
  To: viacheslavo, orika, thomas, Aman Singh, Yuying Zhang,
	Ferruh Yigit, Andrew Rybchenko
  Cc: dev, rasland

For multiple hardware ports connected to a single DPDK port (mhpsdp),
there is currently no information to indicate which hardware port
a received packet belongs to.

This patch introduces a new phy affinity item in the rte flow API;
the phy affinity value reflects the physical port of the received packets.

When the phy affinity is used as a matching item in the flow, and the
same phy_affinity value is set on the tx queue, the packet can be sent from
the same hardware port on which it was received.

This patch also adds the testpmd command line to match the new item:
	flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
	end actions queue index 0 / end

The above command creates a flow on a single DPDK port, matches
packets from the first physical port (assume the phy affinity 1
stands for the first port), and redirects these packets into RxQ 0.
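
For reference, the same rule expressed through the C API; a hedged sketch
(it relies on the RTE_FLOW_ITEM_TYPE_PHY_AFFINITY type and the
rte_flow_item_phy_affinity structure added by this patch, and leaves the
default item mask in place by passing a NULL mask):

#include <rte_flow.h>

/* Match packets received with phy affinity 1 and steer them to Rx queue 0. */
static struct rte_flow *
create_phy_affinity_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
	struct rte_flow_item_phy_affinity spec = { .affinity = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_PHY_AFFINITY, .spec = &spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Returns NULL on failure; details are reported in 'error'. */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}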

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 29 +++++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          |  8 ++++++
 doc/guides/rel_notes/release_23_03.rst      |  5 ++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 28 ++++++++++++++++++++
 6 files changed, 75 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0..a6d4615038 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -465,6 +465,8 @@ enum index {
 	ITEM_METER,
 	ITEM_METER_COLOR,
 	ITEM_METER_COLOR_NAME,
+	ITEM_PHY_AFFINITY,
+	ITEM_PHY_AFFINITY_VALUE,
 
 	/* Validate/create actions. */
 	ACTIONS,
@@ -1355,6 +1357,7 @@ static const enum index next_item[] = {
 	ITEM_L2TPV2,
 	ITEM_PPP,
 	ITEM_METER,
+	ITEM_PHY_AFFINITY,
 	END_SET,
 	ZERO,
 };
@@ -1821,6 +1824,12 @@ static const enum index item_meter[] = {
 	ZERO,
 };
 
+static const enum index item_phy_affinity[] = {
+	ITEM_PHY_AFFINITY_VALUE,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index next_action[] = {
 	ACTION_END,
 	ACTION_VOID,
@@ -6443,6 +6452,23 @@ static const struct token token_list[] = {
 				ARGS_ENTRY(struct buffer, port)),
 		.call = parse_mp,
 	},
+	[ITEM_PHY_AFFINITY] = {
+		.name = "phy_affinity",
+		.help = "match on the physical affinity of the"
+			" received packet.",
+		.priv = PRIV_ITEM(PHY_AFFINITY,
+				  sizeof(struct rte_flow_item_phy_affinity)),
+		.next = NEXT(item_phy_affinity),
+		.call = parse_vc,
+	},
+	[ITEM_PHY_AFFINITY_VALUE] = {
+		.name = "affinity",
+		.help = "physical affinity value",
+		.next = NEXT(item_phy_affinity, NEXT_ENTRY(COMMON_UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_phy_affinity,
+					affinity)),
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -10981,6 +11007,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
 	case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 		mask = &rte_flow_item_meter_color_mask;
 		break;
+	case RTE_FLOW_ITEM_TYPE_PHY_AFFINITY:
+		mask = &rte_flow_item_phy_affinity_mask;
+		break;
 	default:
 		break;
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..3b4e8923dc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,14 @@ Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``PHY_AFFINITY``
+^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the physical affinity of the received packet, the physical port
+in the group of physical ports connected to a single DPDK port.
+
+- ``affinity``: Physical affinity.
+
 Actions
 ~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index c15f6fbb9f..a1abd67771 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -69,6 +69,11 @@ New Features
     ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
     required for eth_rx, eth_tx, crypto and timer eventdev adapters.
 
+* **Added rte_flow support for matching PHY Affinity fields.**
+
+  For the multiple hardware ports connect to a single DPDK port (mhpsdp),
+  Added ``phy_affinity`` item in rte_flow to support physical affinity of
+  the packets.
 
 Removed Items
 -------------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0037506a79..1853030e93 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3712,6 +3712,10 @@ This section lists supported pattern items and their attributes, if any.
 
   - ``color {value}``: meter color value (green/yellow/red).
 
+- ``phy_affinity``: match physical affinity.
+
+  - ``affinity {value}``: physical affinity value.
+
 - ``send_to_kernel``: send packets to kernel.
 
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..0c2d3b679b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -157,6 +157,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
 	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
 	MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
+	MK_FLOW_ITEM(PHY_AFFINITY, sizeof(struct rte_flow_item_phy_affinity)),
 };
 
 /** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..56c04ea37c 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -624,6 +624,13 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_meter_color.
 	 */
 	RTE_FLOW_ITEM_TYPE_METER_COLOR,
+
+	/**
+	 * Matches on the physical affinity of the received packet.
+	 *
+	 * @see struct rte_flow_item_phy_affinity.
+	 */
+	RTE_FLOW_ITEM_TYPE_PHY_AFFINITY,
 };
 
 /**
@@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
 };
 #endif
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
+ *
+ * For the multiple hardware ports connect to a single DPDK port (mhpsdp),
+ * use this item to match the physical affinity of the packets.
+ */
+struct rte_flow_item_phy_affinity {
+	uint8_t affinity; /**< physical affinity value. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_PHY_AFFINITY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_phy_affinity
+rte_flow_item_phy_affinity_mask = {
+	.affinity = 0xff,
+};
+#endif
+
 /**
  * Action types.
  *
-- 
2.18.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-01-30 17:00 ` [PATCH v2 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
  2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
@ 2023-01-30 17:00   ` Jiawei Wang
  2023-01-31 17:26     ` Thomas Monjalon
  2023-02-01  9:05     ` Andrew Rybchenko
  1 sibling, 2 replies; 12+ messages in thread
From: Jiawei Wang @ 2023-01-30 17:00 UTC (permalink / raw)
  To: viacheslavo, orika, thomas, Aman Singh, Yuying Zhang,
	Ferruh Yigit, Andrew Rybchenko
  Cc: dev, rasland

For multiple hardware ports connected to a single DPDK port (mhpsdp),
the previous patch introduces a new rte flow item to match the
phy affinity of the received packets.

This patch adds the tx_phy_affinity setting in the Tx queue API; the affinity
value selects the hardware port to which packets are sent.
Value 0 means no affinity, and traffic will be routed between different
physical ports; if 0 is disabled, then trying to match on phy_affinity 0
will result in an error.

Adds the new tx_phy_affinity field into a padding hole of the rte_eth_txconf
structure, so the size of rte_eth_txconf stays the same. Adds a suppression
type for the structure change in the ABI check file.

This patch adds the testpmd command line:
testpmd> port config (port_id) txq (queue_id) phy_affinity (value)

For example, if there are two hardware ports 0 and 1 connected to
a single DPDK port (port id 0), and phy_affinity 1 stands for
hardware port 0 and phy_affinity 2 stands for hardware port 1,
use the commands below to configure the tx phy affinity for each Tx queue:
        port config 0 txq 0 phy_affinity 1
        port config 0 txq 1 phy_affinity 1
        port config 0 txq 2 phy_affinity 2
        port config 0 txq 3 phy_affinity 2

These commands configure TxQ index 0 and TxQ index 1 with phy affinity 1;
packets sent using TxQ 0 or TxQ 1 will be sent from hardware port 0,
and similarly for hardware port 1 when sending packets with TxQ 2 or TxQ 3.
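
From the C API, a hedged sketch of the equivalent per-queue configuration
(it assumes the rte_eth_txconf::tx_phy_affinity field added by this patch,
starts from the device's default Tx configuration, and simplifies error
handling; the descriptor count and socket are left to the caller):

#include <rte_ethdev.h>

/* Pin Tx queues 0-1 to phy affinity 1 and Tx queues 2-3 to phy affinity 2,
 * mirroring the testpmd commands above. Run before the port is started. */
static int
config_tx_phy_affinity(uint16_t port_id, uint16_t nb_txd, unsigned int socket)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	uint16_t q;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	txconf = dev_info.default_txconf;

	for (q = 0; q < 4; q++) {
		/* 0 means no affinity; hardware ports are numbered from 1. */
		txconf.tx_phy_affinity = (q < 2) ? 1 : 2;
		ret = rte_eth_tx_queue_setup(port_id, q, nb_txd, socket, &txconf);
		if (ret != 0)
			return ret;
	}
	return 0;
}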

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
---
 app/test-pmd/cmdline.c                      | 84 +++++++++++++++++++++
 devtools/libabigail.abignore                |  5 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 13 ++++
 lib/ethdev/rte_ethdev.h                     |  7 ++
 4 files changed, 109 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd4..768f35cb02 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -764,6 +764,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 
 			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
 			"    Cleanup txq mbufs for a specific Tx queue\n\n"
+
+			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
+			"    Set the physical affinity value "
+			"on a specific Tx queue\n\n"
 		);
 	}
 
@@ -12621,6 +12625,85 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+/* *** configure port txq phy_affinity value *** */
+struct cmd_config_tx_phy_affinity {
+	cmdline_fixed_string_t port;
+	cmdline_fixed_string_t config;
+	portid_t portid;
+	cmdline_fixed_string_t txq;
+	uint16_t qid;
+	cmdline_fixed_string_t phy_affinity;
+	uint16_t value;
+};
+
+static void
+cmd_config_tx_phy_affinity_parsed(void *parsed_result,
+				  __rte_unused struct cmdline *cl,
+				  __rte_unused void *data)
+{
+	struct cmd_config_tx_phy_affinity *res = parsed_result;
+	struct rte_port *port;
+
+	if (port_id_is_invalid(res->portid, ENABLED_WARN))
+		return;
+
+	if (res->portid == (portid_t)RTE_PORT_ALL) {
+		printf("Invalid port id\n");
+		return;
+	}
+
+	port = &ports[res->portid];
+
+	if (strcmp(res->txq, "txq")) {
+		printf("Unknown parameter\n");
+		return;
+	}
+	if (tx_queue_id_is_invalid(res->qid))
+		return;
+
+	port->txq[res->qid].conf.tx_phy_affinity = res->value;
+
+	cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 port, "port");
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 config, "config");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 portid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 txq, "txq");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      qid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 phy_affinity, "phy_affinity");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      value, RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
+	.f = cmd_config_tx_phy_affinity_parsed,
+	.data = (void *)0,
+	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
+	.tokens = {
+		(void *)&cmd_config_tx_phy_affinity_port,
+		(void *)&cmd_config_tx_phy_affinity_config,
+		(void *)&cmd_config_tx_phy_affinity_portid,
+		(void *)&cmd_config_tx_phy_affinity_txq,
+		(void *)&cmd_config_tx_phy_affinity_qid,
+		(void *)&cmd_config_tx_phy_affinity_hwport,
+		(void *)&cmd_config_tx_phy_affinity_value,
+		NULL,
+	},
+};
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -12851,6 +12934,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
 	NULL,
 };
 
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..cbbde4ef05 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,6 +20,11 @@
 [suppress_file]
         soname_regexp = ^librte_.*mlx.*glue\.
 
+; Ignore fields inserted in middle padding of rte_eth_txconf
+[suppress_type]
+        name = rte_eth_txconf
+        has_data_member_inserted_between = {offset_after(tx_deferred_start), offset_of(offloads)}
+
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Experimental APIs exceptions ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 1853030e93..e9f20607a2 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+config per queue Tx physical affinity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a per queue physical affinity value only on a specific Tx queue::
+
+   testpmd> port (port_id) txq (queue_id) phy_affinity (value)
+
+* ``phy_affinity``: reflects which hardware port packets will be sent to.
+                    Use it when multiple hardware ports are connected to
+                    a single DPDK port (mhpsdp).
+
+This command should be run when the port is stopped, or else it will fail.
+
 Config VXLAN Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c129ca1eaf..b30467c192 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1138,6 +1138,13 @@ struct rte_eth_txconf {
 				      less free descriptors than this value. */
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Physical affinity to be set.
+	 * Value 0 means no affinity, and traffic could be routed between different
+	 * physical ports; if 0 is disabled, then trying to match on phy_affinity 0
+	 * will result in an error.
+	 */
+	uint8_t tx_phy_affinity;
 	/**
 	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
-- 
2.18.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v2 1/2] ethdev: add PHY affinity match item
  2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
@ 2023-01-31 14:36     ` Ori Kam
  2023-02-01  8:50     ` Andrew Rybchenko
  1 sibling, 0 replies; 12+ messages in thread
From: Ori Kam @ 2023-01-31 14:36 UTC (permalink / raw)
  To: Jiawei(Jonny) Wang, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko
  Cc: dev, Raslan Darawsheh

Hi Jiawei,

> -----Original Message-----
> From: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>
> Sent: Monday, 30 January 2023 19:01
> 
> For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> currently, there is no information to indicate the packet belongs to
> which hardware port.
> 
> This patch introduces a new phy affinity item in rte flow API, and
> the phy affinity value reflects the physical port of the received packets.
> 
> While uses the phy affinity as a matching item in the flow, and sets the
> same phy_affinity value on the tx queue, then the packet can be sent from
> the same hardware port with received.
> 
> This patch also adds the testpmd command line to match the new item:
> 	flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
> 	end actions queue index 0 / end
> 
> The above command means that creates a flow on a single DPDK port and
> matches the packet from the first physical port (assume the phy affinity 1
> stands for the first port) and redirects these packets into RxQ 0.
> 
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> ---
>  app/test-pmd/cmdline_flow.c                 | 29 +++++++++++++++++++++
>  doc/guides/prog_guide/rte_flow.rst          |  8 ++++++
>  doc/guides/rel_notes/release_23_03.rst      |  5 ++++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
>  lib/ethdev/rte_flow.c                       |  1 +
>  lib/ethdev/rte_flow.h                       | 28 ++++++++++++++++++++
>  6 files changed, 75 insertions(+)
> 
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 88108498e0..a6d4615038 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -465,6 +465,8 @@ enum index {
>  	ITEM_METER,
>  	ITEM_METER_COLOR,
>  	ITEM_METER_COLOR_NAME,
> +	ITEM_PHY_AFFINITY,
> +	ITEM_PHY_AFFINITY_VALUE,
> 
>  	/* Validate/create actions. */
>  	ACTIONS,
> @@ -1355,6 +1357,7 @@ static const enum index next_item[] = {
>  	ITEM_L2TPV2,
>  	ITEM_PPP,
>  	ITEM_METER,
> +	ITEM_PHY_AFFINITY,
>  	END_SET,
>  	ZERO,
>  };
> @@ -1821,6 +1824,12 @@ static const enum index item_meter[] = {
>  	ZERO,
>  };
> 
> +static const enum index item_phy_affinity[] = {
> +	ITEM_PHY_AFFINITY_VALUE,
> +	ITEM_NEXT,
> +	ZERO,
> +};
> +
>  static const enum index next_action[] = {
>  	ACTION_END,
>  	ACTION_VOID,
> @@ -6443,6 +6452,23 @@ static const struct token token_list[] = {
>  				ARGS_ENTRY(struct buffer, port)),
>  		.call = parse_mp,
>  	},
> +	[ITEM_PHY_AFFINITY] = {
> +		.name = "phy_affinity",
> +		.help = "match on the physical affinity of the"
> +			" received packet.",
> +		.priv = PRIV_ITEM(PHY_AFFINITY,
> +				  sizeof(struct rte_flow_item_phy_affinity)),
> +		.next = NEXT(item_phy_affinity),
> +		.call = parse_vc,
> +	},
> +	[ITEM_PHY_AFFINITY_VALUE] = {
> +		.name = "affinity",
> +		.help = "physical affinity value",
> +		.next = NEXT(item_phy_affinity,
> NEXT_ENTRY(COMMON_UNSIGNED),
> +			     item_param),
> +		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_phy_affinity,
> +					affinity)),
> +	},
>  };
> 
>  /** Remove and return last entry from argument stack. */
> @@ -10981,6 +11007,9 @@ flow_item_default_mask(const struct
> rte_flow_item *item)
>  	case RTE_FLOW_ITEM_TYPE_METER_COLOR:
>  		mask = &rte_flow_item_meter_color_mask;
>  		break;
> +	case RTE_FLOW_ITEM_TYPE_PHY_AFFINITY:
> +		mask = &rte_flow_item_phy_affinity_mask;
> +		break;
>  	default:
>  		break;
>  	}
> diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> index 3e6242803d..3b4e8923dc 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1544,6 +1544,14 @@ Matches Color Marker set by a Meter.
> 
>  - ``color``: Metering color marker.
> 
> +Item: ``PHY_AFFINITY``
> +^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches on the physical affinity of the received packet, the physical port
> +in the group of physical ports connected to a single DPDK port.
> +
> +- ``affinity``: Physical affinity.
> +
>  Actions
>  ~~~~~~~
> 
> diff --git a/doc/guides/rel_notes/release_23_03.rst
> b/doc/guides/rel_notes/release_23_03.rst
> index c15f6fbb9f..a1abd67771 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -69,6 +69,11 @@ New Features
>      ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
>      required for eth_rx, eth_tx, crypto and timer eventdev adapters.
> 
> +* **Added rte_flow support for matching PHY Affinity fields.**
> +
> +  For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> +  Added ``phy_affinity`` item in rte_flow to support physical affinity of
> +  the packets.
> 
>  Removed Items
>  -------------
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0037506a79..1853030e93 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -3712,6 +3712,10 @@ This section lists supported pattern items and
> their attributes, if any.
> 
>    - ``color {value}``: meter color value (green/yellow/red).
> 
> +- ``phy_affinity``: match physical affinity.
> +
> +  - ``affinity {value}``: physical affinity value.
> +
>  - ``send_to_kernel``: send packets to kernel.
> 
> 
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 7d0c24366c..0c2d3b679b 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -157,6 +157,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_item[] = {
>  	MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
>  	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
>  	MK_FLOW_ITEM(METER_COLOR, sizeof(struct
> rte_flow_item_meter_color)),
> +	MK_FLOW_ITEM(PHY_AFFINITY, sizeof(struct
> rte_flow_item_phy_affinity)),
>  };
> 
>  /** Generate flow_action[] entry. */
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index b60987db4b..56c04ea37c 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -624,6 +624,13 @@ enum rte_flow_item_type {
>  	 * See struct rte_flow_item_meter_color.
>  	 */
>  	RTE_FLOW_ITEM_TYPE_METER_COLOR,
> +
> +	/**
> +	 * Matches on the physical affinity of the received packet.
> +	 *
> +	 * @see struct rte_flow_item_phy_affinity.
> +	 */
> +	RTE_FLOW_ITEM_TYPE_PHY_AFFINITY,
>  };
> 
>  /**
> @@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color
> rte_flow_item_meter_color_mask = {
>  };
>  #endif
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
> + *
> + * For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> + * use this item to match the physical affinity of the packets.
> + */
> +struct rte_flow_item_phy_affinity {
> +	uint8_t affinity; /**< physical affinity value. */
> +};
> +
> +/** Default mask for RTE_FLOW_ITEM_TYPE_PHY_AFFINITY. */
> +#ifndef __cplusplus
> +static const struct rte_flow_item_phy_affinity
> +rte_flow_item_phy_affinity_mask = {
> +	.affinity = 0xff,
> +};
> +#endif
> +
>  /**
>   * Action types.
>   *
> --
> 2.18.1

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
@ 2023-01-31 17:26     ` Thomas Monjalon
  2023-02-01  9:45       ` Jiawei(Jonny) Wang
  2023-02-01  9:05     ` Andrew Rybchenko
  1 sibling, 1 reply; 12+ messages in thread
From: Thomas Monjalon @ 2023-01-31 17:26 UTC (permalink / raw)
  To: Jiawei Wang
  Cc: viacheslavo, orika, Aman Singh, Yuying Zhang, Ferruh Yigit,
	Andrew Rybchenko, dev, rasland, david.marchand

30/01/2023 18:00, Jiawei Wang:
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -20,6 +20,11 @@
>  [suppress_file]
>          soname_regexp = ^librte_.*mlx.*glue\.
>  
> +; Ignore fields inserted in middle padding of rte_eth_txconf
> +[suppress_type]
> +        name = rte_eth_txconf
> +        has_data_member_inserted_between = {offset_after(tx_deferred_start), offset_of(offloads)}

You are adding the exception inside
"Core suppression rules: DO NOT TOUCH".

Please move it at the end in the section
"Temporary exceptions till next major ABI version"

Also the rule does not work.
It should be:
	has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}




^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/2] ethdev: add PHY affinity match item
  2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
  2023-01-31 14:36     ` Ori Kam
@ 2023-02-01  8:50     ` Andrew Rybchenko
  2023-02-01 14:59       ` Jiawei(Jonny) Wang
  1 sibling, 1 reply; 12+ messages in thread
From: Andrew Rybchenko @ 2023-02-01  8:50 UTC (permalink / raw)
  To: Jiawei Wang, viacheslavo, orika, thomas, Aman Singh,
	Yuying Zhang, Ferruh Yigit
  Cc: dev, rasland

On 1/30/23 20:00, Jiawei Wang wrote:
> For the multiple hardware ports connect to a single DPDK port (mhpsdp),

Sorry, what is mhpsdp?

> currently, there is no information to indicate the packet belongs to
> which hardware port.
> 
> This patch introduces a new phy affinity item in rte flow API, and

"This patch introduces ..." -> "Introduce ..."
rte -> RTE

> the phy affinity value reflects the physical port of the received packets.
> 
> While uses the phy affinity as a matching item in the flow, and sets the
> same phy_affinity value on the tx queue, then the packet can be sent from

tx -> Tx

> the same hardware port with received.
> 
> This patch also adds the testpmd command line to match the new item:
> 	flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
> 	end actions queue index 0 / end
> 
> The above command means that creates a flow on a single DPDK port and
> matches the packet from the first physical port (assume the phy affinity 1

Why is it numbered from 1, not 0? Anyway it should be defined
in the documentation below.

> stands for the first port) and redirects these packets into RxQ 0.
> 
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>

[snip]

> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index c15f6fbb9f..a1abd67771 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -69,6 +69,11 @@ New Features
>       ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
>       required for eth_rx, eth_tx, crypto and timer eventdev adapters.
>   
> +* **Added rte_flow support for matching PHY Affinity fields.**

Why "Affinity", not "affinity"?

> +
> +  For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> +  Added ``phy_affinity`` item in rte_flow to support physical affinity of
> +  the packets.

Please, add one more empty line to have two before the next
section.

>   
>   Removed Items
>   -------------

[snip]

> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index b60987db4b..56c04ea37c 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h

> @@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
>   };
>   #endif
>   
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
> + * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
> + *
> + * For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> + * use this item to match the physical affinity of the packets.
> + */
> +struct rte_flow_item_phy_affinity {
> +	uint8_t affinity; /**< physical affinity value. */

Sorry, I'd like to know how application should find out which
values may be used here? How many physical ports are behind
this one DPDK ethdev?

Also, please, define which value should be used for the first
port 0 or 1. I'd vote for 0.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
  2023-01-31 17:26     ` Thomas Monjalon
@ 2023-02-01  9:05     ` Andrew Rybchenko
  2023-02-01 15:50       ` Jiawei(Jonny) Wang
  1 sibling, 1 reply; 12+ messages in thread
From: Andrew Rybchenko @ 2023-02-01  9:05 UTC (permalink / raw)
  To: Jiawei Wang, viacheslavo, orika, thomas, Aman Singh,
	Yuying Zhang, Ferruh Yigit
  Cc: dev, rasland

On 1/30/23 20:00, Jiawei Wang wrote:
> For the multiple hardware ports connect to a single DPDK port (mhpsdp),
> the previous patch introduces the new rte flow item to match the
> phy affinity of the received packets.
> 
> This patch adds the tx_phy_affinity setting in Tx queue API, the affinity

"This patch adds" -> "Add ..."

> value reflects packets be sent to which hardware port.
> Value 0 is no affinity and traffic will be routed between different
> physical ports,

Who will it be routed by?

> if 0 is disabled then try to match on phy_affinity 0
> will result in an error.

Why are you talking about matching here?

> 
> Adds the new tx_phy_affinity field into the padding hole of rte_eth_txconf
> structure, the size of rte_eth_txconf keeps the same. Adds a suppress
> type for structure change in the ABI check file.
> 
> This patch adds the testpmd command line:
> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> 
> For example, there're two hardware ports 0 and 1 connected to
> a single DPDK port (port id 0), and phy_affinity 1 stood for
> hardware port 0 and phy_affinity 2 stood for hardware port 1,
> used the below command to config tx phy affinity for per Tx Queue:
>          port config 0 txq 0 phy_affinity 1
>          port config 0 txq 1 phy_affinity 1
>          port config 0 txq 2 phy_affinity 2
>          port config 0 txq 3 phy_affinity 2
> 
> These commands config the TxQ index 0 and TxQ index 1 with phy affinity 1,
> uses TxQ 0 or TxQ 1 send packets, these packets will be sent from the
> hardware port 0, and similar with hardware port 1 if sending packets
> with TxQ 2 or TxQ 3.

Frankly speaking I dislike it. Why do we need to expose it on
generic ethdev layer? IMHO dynamic mbuf field would be a better
solution to control Tx routing to a specific PHY port.

IMHO, we definitely need dev_info information about the number of
physical ports behind. Advertising a value greater than 0 should
mean that the PMD supports the corresponding mbuf dynamic field to
control the outgoing physical port on Tx (or should just reject
packets on prepare which try to specify an outgoing phy port
otherwise). In the same way the information may be provided
on Rx.

I'm OK to have 0 as no phy affinity value and greater than
zero as specified phy affinity. I.e. no dynamic flag is
required.
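
As a rough illustration of this alternative (not part of the series; the
field name and the per-packet semantics are hypothetical and would have to
be agreed with PMDs that honour such a dynamic field):

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Hypothetical per-packet control of the outgoing physical port through a
 * dynamic mbuf field: 0 = no affinity, >0 = a specific physical port. */
static int tx_phy_port_offset = -1;

static int
register_tx_phy_port_field(void)
{
	static const struct rte_mbuf_dynfield desc = {
		.name = "example_dynfield_tx_phy_port", /* hypothetical name */
		.size = sizeof(uint8_t),
		.align = __alignof__(uint8_t),
	};

	tx_phy_port_offset = rte_mbuf_dynfield_register(&desc);
	return tx_phy_port_offset < 0 ? -1 : 0;
}

static inline void
set_tx_phy_port(struct rte_mbuf *m, uint8_t phy_port)
{
	*RTE_MBUF_DYNFIELD(m, tx_phy_port_offset, uint8_t *) = phy_port;
}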

Also I think that order of patches should be different.
We should start from a patch which provides dev_info and
flow API matching and action should be in later patch.

> 
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>

[snip]


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-01-31 17:26     ` Thomas Monjalon
@ 2023-02-01  9:45       ` Jiawei(Jonny) Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Jiawei(Jonny) Wang @ 2023-02-01  9:45 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon (EXTERNAL)
  Cc: Slava Ovsiienko, Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit,
	Andrew Rybchenko, dev, Raslan Darawsheh, david.marchand


> 30/01/2023 18:00, Jiawei Wang:
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -20,6 +20,11 @@
> >  [suppress_file]
> >          soname_regexp = ^librte_.*mlx.*glue\.
> >
> > +; Ignore fields inserted in middle padding of rte_eth_txconf
> > +[suppress_type]
> > +        name = rte_eth_txconf
> > +        has_data_member_inserted_between =
> > +{offset_after(tx_deferred_start), offset_of(offloads)}
> 
> You are adding the exception inside
> "Core suppression rules: DO NOT TOUCH".
> 
> Please move it at the end in the section "Temporary exceptions till next major
> ABI version"
> 

OK, will move.

> Also the rule does not work.
> It should be:
> 	has_data_member_inserted_between = {offset_of(tx_deferred_start),
> offset_of(offloads)}
> 

Thanks, will change it and send it with the new version.
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v2 1/2] ethdev: add PHY affinity match item
  2023-02-01  8:50     ` Andrew Rybchenko
@ 2023-02-01 14:59       ` Jiawei(Jonny) Wang
  0 siblings, 0 replies; 12+ messages in thread
From: Jiawei(Jonny) Wang @ 2023-02-01 14:59 UTC (permalink / raw)
  To: Andrew Rybchenko, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Aman Singh, Yuying Zhang, Ferruh Yigit
  Cc: dev, Raslan Darawsheh

Hi,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, February 1, 2023 4:50 PM
> On 1/30/23 20:00, Jiawei Wang wrote:
> > For the multiple hardware ports connect to a single DPDK port
> > (mhpsdp),
> 
> Sorry, what is mhpsdp?
> 

(m)ultiple (h)ardware (p)orts (s)ingle (D)PDK (p)ort.
It's a short name for "multiple hardware ports connected to a single DPDK port".

> > currently, there is no information to indicate the packet belongs to
> > which hardware port.
> >
> > This patch introduces a new phy affinity item in rte flow API, and
> 
> "This patch introduces ..." -> "Introduce ..."
> rte -> RTE
> 

OK.
> > the phy affinity value reflects the physical port of the received packets.
> >
> > While uses the phy affinity as a matching item in the flow, and sets
> > the same phy_affinity value on the tx queue, then the packet can be
> > sent from
> 
> tx -> Tx
> 

OK.
> > the same hardware port with received.
> >
> > This patch also adds the testpmd command line to match the new item:
> > 	flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
> > 	end actions queue index 0 / end
> >
> > The above command means that creates a flow on a single DPDK port and
> > matches the packet from the first physical port (assume the phy
> > affinity 1
> 
> Why is it numbered from 1, not 0? Anyway it should be defined in the
> documentation below.
> 

When the phy affinity is used as a matching item in the flow, and the same
phy_affinity value is set on the tx queue, the packet can be sent from the
same hardware port on which it was received.

So, if phy affinity 0 means no affinity, then the first value should be 1.


> > stands for the first port) and redirects these packets into RxQ 0.
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> 
> [snip]
> 
> > diff --git a/doc/guides/rel_notes/release_23_03.rst
> > b/doc/guides/rel_notes/release_23_03.rst
> > index c15f6fbb9f..a1abd67771 100644
> > --- a/doc/guides/rel_notes/release_23_03.rst
> > +++ b/doc/guides/rel_notes/release_23_03.rst
> > @@ -69,6 +69,11 @@ New Features
> >       ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
> >       required for eth_rx, eth_tx, crypto and timer eventdev adapters.
> >
> > +* **Added rte_flow support for matching PHY Affinity fields.**
> 
> Why "Affinity", not "affinity"?
> 

correct, will update.
> > +
> > +  For the multiple hardware ports connect to a single DPDK port
> > + (mhpsdp),  Added ``phy_affinity`` item in rte_flow to support
> > + physical affinity of  the packets.
> 
> Please, add one more empty line to have two before the next section.
> 
OK.
> >
> >   Removed Items
> >   -------------
> 
> [snip]
> 
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > b60987db4b..56c04ea37c 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> 
> > @@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color
> rte_flow_item_meter_color_mask = {
> >   };
> >   #endif
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
> > + *
> > + * For the multiple hardware ports connect to a single DPDK port
> > +(mhpsdp),
> > + * use this item to match the physical affinity of the packets.
> > + */
> > +struct rte_flow_item_phy_affinity {
> > +	uint8_t affinity; /**< physical affinity value. */
> 
> Sorry, I'd like to know how application should find out which values may be
> used here? How many physical ports are behind this one DPDK ethdev?
> 

Like the Linux bonding scenario: multiple physical ports (for example PF1, PF2) can be added into a bond port in the slave role,
and DPDK only probes and attaches the bond master port (bond0), so there are two phy affinity values in total.

The PMD can define the phy affinity and its mapping to the physical port, or I can document the numbering at the RTE level.

> Also, please, define which value should be used for the first port 0 or 1. I'd vote
> for 0.

If we need to define the affinity numbering,
I prefer to use 1 for the first port and reserve 0, keeping the same meaning as on the tx side (the second patch introduces tx_phy_affinity).





^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-02-01  9:05     ` Andrew Rybchenko
@ 2023-02-01 15:50       ` Jiawei(Jonny) Wang
  2023-02-02  9:28         ` Andrew Rybchenko
  0 siblings, 1 reply; 12+ messages in thread
From: Jiawei(Jonny) Wang @ 2023-02-01 15:50 UTC (permalink / raw)
  To: Andrew Rybchenko, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Aman Singh, Yuying Zhang, Ferruh Yigit
  Cc: dev, Raslan Darawsheh


Hi,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue
> API
> 
> On 1/30/23 20:00, Jiawei Wang wrote:
> > For the multiple hardware ports connect to a single DPDK port
> > (mhpsdp), the previous patch introduces the new rte flow item to match
> > the phy affinity of the received packets.
> >
> > This patch adds the tx_phy_affinity setting in Tx queue API, the
> > affinity
> 
> "This patch adds" -> "Add ..."
> 
OK,  will change to 'Add the tx_phy_affinity...."

> > value reflects packets be sent to which hardware port.
> > Value 0 is no affinity and traffic will be routed between different
> > physical ports,
> 
> Who will it be routed?
> 

Assume there are two slave physical ports bonded and DPDK attached to the bond master port.
The packets can be sent from the first physical port or the second physical port; it depends on the PMD
driver and the low-level 'routing' selection.

> > if 0 is disabled then try to match on phy_affinity 0 will result in an
> > error.
> 
> Why are you talking about matching here?
> 

In the previous patch we mentioned that the same phy affinity can be used to handle the packet on the same hardware
port, so if 0 means no affinity, then matching on it should report an error.

> >
> > Adds the new tx_phy_affinity field into the padding hole of
> > rte_eth_txconf structure, the size of rte_eth_txconf keeps the same.
> > Adds a suppress type for structure change in the ABI check file.
> >
> > This patch adds the testpmd command line:
> > testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >
> > For example, there're two hardware ports 0 and 1 connected to a single
> > DPDK port (port id 0), and phy_affinity 1 stood for hardware port 0
> > and phy_affinity 2 stood for hardware port 1, used the below command
> > to config tx phy affinity for per Tx Queue:
> >          port config 0 txq 0 phy_affinity 1
> >          port config 0 txq 1 phy_affinity 1
> >          port config 0 txq 2 phy_affinity 2
> >          port config 0 txq 3 phy_affinity 2
> >
> > These commands config the TxQ index 0 and TxQ index 1 with phy
> > affinity 1, uses TxQ 0 or TxQ 1 send packets, these packets will be
> > sent from the hardware port 0, and similar with hardware port 1 if
> > sending packets with TxQ 2 or TxQ 3.
> 
> Frankly speaking I dislike it. Why do we need to expose it on generic ethdev
> layer? IMHO dynamic mbuf field would be a better solution to control Tx
> routing to a specific PHY port.
> 

OK; the phy affinity is not part of the packet information (like a timestamp).
And second, the phy affinity is at the Queue layer, that is, the phy affinity value
should keep the same behavior per Queue.
After the TxQ is created, the packets should be sent to the same physical port
if using the same TxQ index.

> IMHO, we definitely need dev_info information about a number of physical
> ports behind. Advertising value greater than 0 should mean that PMD supports
> corresponding mbuf dynamic field to contol ongoing physical port on Tx (or
> should just reject packets on prepare which try to specify outgoing phy port
> otherwise). In the same way the information may be provided on Rx.
> 

See above, I think phy affinity is Queue level not for each packet.

> I'm OK to have 0 as no phy affinity value and greater than zero as specified phy
> affinity. I.e. no dynamic flag is required.
> 

Thanks for agreement.

> Also I think that order of patches should be different.
> We should start from a patch which provides dev_info and flow API matching
> and action should be in later patch.
>

OK.
 
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> 
> [snip]


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-02-01 15:50       ` Jiawei(Jonny) Wang
@ 2023-02-02  9:28         ` Andrew Rybchenko
  2023-02-02 14:43           ` Thomas Monjalon
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Rybchenko @ 2023-02-02  9:28 UTC (permalink / raw)
  To: Jiawei(Jonny) Wang, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Aman Singh, Yuying Zhang, Ferruh Yigit
  Cc: dev, Raslan Darawsheh

On 2/1/23 18:50, Jiawei(Jonny) Wang wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Subject: Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue
>> API
>>
>> On 1/30/23 20:00, Jiawei Wang wrote:
>>> Adds the new tx_phy_affinity field into the padding hole of
>>> rte_eth_txconf structure, the size of rte_eth_txconf keeps the same.
>>> Adds a suppress type for structure change in the ABI check file.
>>>
>>> This patch adds the testpmd command line:
>>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
>>>
>>> For example, there're two hardware ports 0 and 1 connected to a single
>>> DPDK port (port id 0), and phy_affinity 1 stood for hardware port 0
>>> and phy_affinity 2 stood for hardware port 1, used the below command
>>> to config tx phy affinity for per Tx Queue:
>>>           port config 0 txq 0 phy_affinity 1
>>>           port config 0 txq 1 phy_affinity 1
>>>           port config 0 txq 2 phy_affinity 2
>>>           port config 0 txq 3 phy_affinity 2
>>>
>>> These commands config the TxQ index 0 and TxQ index 1 with phy
>>> affinity 1, uses TxQ 0 or TxQ 1 send packets, these packets will be
>>> sent from the hardware port 0, and similar with hardware port 1 if
>>> sending packets with TxQ 2 or TxQ 3.
>>
>> Frankly speaking I dislike it. Why do we need to expose it on generic ethdev
>> layer? IMHO dynamic mbuf field would be a better solution to control Tx
>> routing to a specific PHY port.
>>
> 
> OK, the phy affinity is not part of packet information(like timestamp).

Why? port_id is packet information. Why is phy_subport_id not
packet information?

> And second, the phy affinity is Queue layer, that is, the phy affinity value
> should keep the same behavior per Queue.
> After the TxQ was created, the packets should be sent the same physical port
> If using the same TxQ index.

Why should these queues be visible to the DPDK application?
Nobody prevents you from creating many HW queues behind one ethdev
queue. Of course, there are questions related to the descriptor status
API in this case, but IMHO it would be better than exposing
these details at the application level.

> 
>> IMHO, we definitely need dev_info information about a number of physical
>> ports behind. Advertising value greater than 0 should mean that PMD supports
>> corresponding mbuf dynamic field to contol ongoing physical port on Tx (or
>> should just reject packets on prepare which try to specify outgoing phy port
>> otherwise). In the same way the information may be provided on Rx.
>>
> 
> See above, I think phy affinity is Queue level not for each packet.
> 
>> I'm OK to have 0 as no phy affinity value and greater than zero as specified phy
>> affinity. I.e. no dynamic flag is required.
>>
> 
> Thanks for agreement.
> 
>> Also I think that order of patches should be different.
>> We should start from a patch which provides dev_info and flow API matching
>> and action should be in later patch.
>>
> 
> OK.
>   
>>>
>>> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
>>
>> [snip]
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-02-02  9:28         ` Andrew Rybchenko
@ 2023-02-02 14:43           ` Thomas Monjalon
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Monjalon @ 2023-02-02 14:43 UTC (permalink / raw)
  To: Jiawei(Jonny) Wang, Andrew Rybchenko
  Cc: Slava Ovsiienko, Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit,
	dev, Raslan Darawsheh

02/02/2023 10:28, Andrew Rybchenko:
> On 2/1/23 18:50, Jiawei(Jonny) Wang wrote:
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> On 1/30/23 20:00, Jiawei Wang wrote:
> >>> Adds the new tx_phy_affinity field into the padding hole of
> >>> rte_eth_txconf structure, the size of rte_eth_txconf keeps the same.
> >>> Adds a suppress type for structure change in the ABI check file.
> >>>
> >>> This patch adds the testpmd command line:
> >>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >>>
> >>> For example, there're two hardware ports 0 and 1 connected to a single
> >>> DPDK port (port id 0), and phy_affinity 1 stood for hardware port 0
> >>> and phy_affinity 2 stood for hardware port 1, used the below command
> >>> to config tx phy affinity for per Tx Queue:
> >>>           port config 0 txq 0 phy_affinity 1
> >>>           port config 0 txq 1 phy_affinity 1
> >>>           port config 0 txq 2 phy_affinity 2
> >>>           port config 0 txq 3 phy_affinity 2
> >>>
> >>> These commands config the TxQ index 0 and TxQ index 1 with phy
> >>> affinity 1, uses TxQ 0 or TxQ 1 send packets, these packets will be
> >>> sent from the hardware port 0, and similar with hardware port 1 if
> >>> sending packets with TxQ 2 or TxQ 3.
> >>
> >> Frankly speaking I dislike it. Why do we need to expose it on generic ethdev
> >> layer? IMHO dynamic mbuf field would be a better solution to control Tx
> >> routing to a specific PHY port.

The design of this patch is to map a queue of the front device
to an underlying port.
This design may be applicable to several situations,
including DPDK bonding PMD, or Linux bonding connected to a PMD.

The default is 0, meaning the queue is not mapped to anything (no change).
If the affinity is higher than 0, then the queue can be configured as desired.
Then if an application wants to send a packet to a specific underlying port,
it just has to send to the right queue.

Functionally, mapping the queue or setting the port in the mbuf (your proposal)
are the same.
The advantages of the queue mapping are:
	- faster to use a queue than filling mbuf field
	- optimization can be done at queue setup
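
A small sketch of what that looks like from the application side (hedged:
the queue-to-port mapping table is built by the application itself when it
configures the queues, as described above):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Application-maintained mapping: txq_for_phy[p] is a Tx queue that was
 * configured with tx_phy_affinity == p (ports numbered from 1). */
static const uint16_t txq_for_phy[] = { 0, 0, 2 }; /* e.g. phy 1 -> txq 0, phy 2 -> txq 2 */

static inline uint16_t
send_via_phy_port(uint16_t port_id, uint8_t phy,
		  struct rte_mbuf **pkts, uint16_t nb)
{
	/* Sending to a specific underlying port is just picking the right queue. */
	return rte_eth_tx_burst(port_id, txq_for_phy[phy], pkts, nb);
}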

[...]
> Why are these queues should be visible to DPDK application?
> Nobody denies you to create many HW queues behind one ethdev
> queue. Of course, there questions related to descriptor status
> API in this case, but IMHO it would be better than exposing
> these details to an application level.

Why not map the queues if the application requires these details?

> >> IMHO, we definitely need dev_info information about a number of physical
> >> ports behind.

Yes dev_info would be needed.

> >> Advertising value greater than 0 should mean that PMD supports
> >> corresponding mbuf dynamic field to contol ongoing physical port on Tx (or
> >> should just reject packets on prepare which try to specify outgoing phy port
> >> otherwise). In the same way the information may be provided on Rx.
> > 
> > See above, I think phy affinity is Queue level not for each packet.
> > 
> >> I'm OK to have 0 as no phy affinity value and greater than zero as specified phy
> >> affinity. I.e. no dynamic flag is required.
> > 
> > Thanks for agreement.
> > 
> >> Also I think that order of patches should be different.
> >> We should start from a patch which provides dev_info and flow API matching
> >> and action should be in later patch.
> > 
> > OK.




^ permalink raw reply	[flat|nested] 12+ messages in thread

End of thread.

Thread overview: 12+ messages
     [not found] <http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaweiw@nvidia.com/>
2023-01-30 17:00 ` [PATCH v2 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
2023-01-31 14:36     ` Ori Kam
2023-02-01  8:50     ` Andrew Rybchenko
2023-02-01 14:59       ` Jiawei(Jonny) Wang
2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
2023-01-31 17:26     ` Thomas Monjalon
2023-02-01  9:45       ` Jiawei(Jonny) Wang
2023-02-01  9:05     ` Andrew Rybchenko
2023-02-01 15:50       ` Jiawei(Jonny) Wang
2023-02-02  9:28         ` Andrew Rybchenko
2023-02-02 14:43           ` Thomas Monjalon
