* [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature
@ 2019-11-05 8:01 Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 01/20] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
` (22 more replies)
0 siblings, 23 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Modern networks are based on packet switching, and in the network
environment data are transmitted as packets. Within the host, besides
the data actually transmitted on the wire as packets, there may be
some out-of-band data that helps to process the packets. These data
are called metadata; they exist on a per-packet basis and are attached
to each packet as extra dedicated storage (besides the packet data
itself).
In DPDK, network data are represented as mbuf structure chains and
travel along the application/DPDK datapath. On the other side, DPDK
provides the rte_flow API to control the flow engine. To be precise,
there are two kinds of metadata in DPDK: one is purely software
metadata (fields of the mbuf - flags, packet type, data length, etc.),
and the other is metadata within the flow engine.
In this scope, we cover only the second type (flow engine metadata).
The flow engine metadata is extra data, supported on a per-packet
basis and usually handled by hardware inside the flow engine.
Initially, two metadata related actions were proposed:
- RTE_FLOW_ACTION_TYPE_FLAG
- RTE_FLOW_ACTION_TYPE_MARK
The FLAG action sets a special flag in the packet metadata, the MARK
action stores a specified value in the metadata storage, and, on
packet reception, the PMD puts the flag and value into the mbuf, so
applications can see that the packet was treated inside the flow
engine according to the appropriate rte_flow rule(s). MARK and FLAG
act as a kind of gateway to transfer per-packet information from the
flow engine to the application via the receiving datapath. Also, the
item of type RTE_FLOW_ITEM_TYPE_MARK is provided. It allows extending
the flow match pattern with the capability to match the metadata
values set by MARK/FLAG actions in other flows.
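Just for illustration (outside the scope of this patchset), a minimal
sketch of this receive-side gateway; the Ethernet-only pattern, the
mark value 0xbeef and queue index 0 are arbitrary placeholders:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_flow.h>

/* Mark all Ethernet traffic and steer it to Rx queue 0. */
static struct rte_flow *
mark_all_traffic(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 0xbeef };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

/* On the Rx datapath the PMD reports the flag/value back via the mbuf. */
static void
print_mark(const struct rte_mbuf *m)
{
	if (m->ol_flags & PKT_RX_FDIR_ID)
		printf("MARK: %u\n", (unsigned int)m->hash.fdir.hi);
	else if (m->ol_flags & PKT_RX_FDIR)
		printf("FLAG set\n");
}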
From the datapath point of view, MARK and FLAG are related to the
receiving side only. It would be useful to have the same gateway on
the transmitting side, so the item of type RTE_FLOW_ITEM_TYPE_META
was proposed. The application can fill the corresponding field in the
mbuf, and this value will be transferred to some field in the packet
metadata inside the flow engine.
It did not matter whether these metadata fields were shared, because
the MARK and META items belonged to different domains (receiving and
transmitting) and could be vendor-specific.
So far, so good: DPDK provides entities to control metadata inside
the flow engine and gateways to exchange these values with the
datapath on a per-packet basis.
As we can see, the MARK and META means are not symmetric: there is no
action that would allow setting the META value on the transmitting
path. So, the action of type:
- RTE_FLOW_ACTION_TYPE_SET_META
is proposed.
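A hedged sketch of the intended symmetric usage (it assumes the
metadata mbuf dynamic field helpers introduced by the prerequisite
patches [1]/[2]; the value 42 and the drop/ethernet patterns are
arbitrary placeholders, not part of this series):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_flow.h>

/* Egress rule: drop any packet the application tagged with metadata 42. */
static struct rte_flow *
match_meta_on_tx(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .egress = 1 };
	struct rte_flow_item_meta spec = { .data = 42 };
	struct rte_flow_item_meta mask = { .data = UINT32_MAX };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_META,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

/* The application tags the mbuf before calling rte_eth_tx_burst();
 * rte_flow_dynf_metadata_register() must have been called once at init. */
static void
tag_mbuf(struct rte_mbuf *m)
{
	rte_flow_dynf_metadata_set(m, 42);
	m->ol_flags |= PKT_TX_DYNF_METADATA;
}

/* Ingress rule: the proposed SET_META action writes the same field,
 * closing the gap on the receiving path. */
static struct rte_flow *
set_meta_on_rx(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_set_meta conf = {
		.data = 42, .mask = UINT32_MAX,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}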
Next, applications raise new requirements for packet metadata. Flow
engines are getting more complex: internal switches are introduced,
and multiple ports might be supported within the same flow engine
namespace. From the DPDK point of view, it means that packets might
be sent on one eth_dev port and received on another one, while the
packet path inside the flow engine belongs entirely to the same
hardware device. The simplest example is SR-IOV with PF, VFs and the
representors. This is a great opportunity to provide an out-of-band
channel to transfer extra data from one port to another, besides the
packet data itself, and applications would like to use it.
To improve the metadata definitions, it is proposed to:
- treat the MARK and META metadata fields as dedicated, not shared
- extend the area of application of the MARK and META items/actions
  to all flow engine domains - transmitting and receiving
- allow MARK and META metadata to be preserved while crossing the
  flow domains (from the transmit origin through the flow database
  inside the (E-)Switch to the receiving side domain); in simple
  words, to allow metadata to accompany the packet through the
  entire flow engine space.
Another newly proposed feature is a transient per-packet storage
inside the flow engine. It might have a lot of use cases. For
example, if there is VXLAN tunneled traffic and some flow performs
VXLAN decapsulation and wishes to save information regarding the
dropped header, it could use this temporary transient storage. The
tools to maintain this storage are traditional for the DPDK rte_flow
API (a usage sketch follows the property list below):
- RTE_FLOW_ACTION_TYPE_SET_TAG - to set a value
- RTE_FLOW_ITEM_TYPE_TAG - to match on it
The primary properties of the proposed storage are:
- the storage is presented as an array of 32-bit opaque values
- the size of the array (or even the bitmap of available indices) is
  vendor-specific and is subject to run-time trial
- it is transient, meaning it exists only inside the flow engine;
  there are no gateways for interacting with the datapath -
  applications have no way either to specify these data on transmit
  or to get them on receive
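To illustrate, a hedged sketch of how an application could use the
transient storage across two flow groups (tag index 0, value 0x5 and
the jump target are arbitrary; the number of usable indices has to be
probed at run time, as noted above):

#include <stdint.h>
#include <rte_flow.h>

/* Group 0: remember a classification result in tag[0], then jump. */
static struct rte_flow *
classify_and_tag(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_set_tag set_tag = {
		.index = 0, .data = 0x5, .mask = UINT32_MAX,
	};
	struct rte_flow_action_jump jump = { .group = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SET_TAG, .conf = &set_tag },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

/* Group 1: match on the value stored by the previous sub-flow and drop. */
static struct rte_flow *
match_tag(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
	struct rte_flow_item_tag spec = { .index = 0, .data = 0x5 };
	struct rte_flow_item_tag mask = { .index = 0xff, .data = UINT32_MAX };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_TAG,
		  .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}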
This patchset implements the abovementioned extensive metadata
feature in the mlx5 PMD.
The patchset must be applied after public RTE API updates:
[1] http://patches.dpdk.org/patch/62354/
[2] http://patches.dpdk.org/patch/62355/
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Viacheslav Ovsiienko (20):
net/mlx5: convert internal tag endianness
net/mlx5: update modify header action translator
net/mlx5: add metadata register copy
net/mlx5: refactor flow structure
net/mlx5: update flow functions
net/mlx5: update meta register matcher set
net/mlx5: rename structure and function
net/mlx5: check metadata registers availability
net/mlx5: add devarg for extensive metadata support
net/mlx5: adjust shared register according to mask
net/mlx5: check the maximal modify actions number
net/mlx5: update metadata register id query
net/mlx5: add flow tag support
net/mlx5: extend flow mark support
net/mlx5: extend flow meta data support
net/mlx5: add meta data support to Rx datapath
net/mlx5: add simple hash table
net/mlx5: introduce flow splitters chain
net/mlx5: split Rx flows to provide metadata copy
net/mlx5: add metadata register copy table
doc/guides/nics/mlx5.rst | 49 +
drivers/net/mlx5/mlx5.c | 135 ++-
drivers/net/mlx5/mlx5.h | 19 +-
drivers/net/mlx5/mlx5_defs.h | 7 +
drivers/net/mlx5/mlx5_ethdev.c | 8 +-
drivers/net/mlx5/mlx5_flow.c | 1178 ++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 108 ++-
drivers/net/mlx5/mlx5_flow_dv.c | 1544 ++++++++++++++++++++++++------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 +-
drivers/net/mlx5/mlx5_prm.h | 45 +-
drivers/net/mlx5/mlx5_rxtx.c | 6 +
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +-
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +-
drivers/net/mlx5/mlx5_utils.h | 115 ++-
15 files changed, 2922 insertions(+), 422 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 01/20] net/mlx5: convert internal tag endianness
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 02/20] net/mlx5: update modify header action translator Viacheslav Ovsiienko
` (21 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The public API RTE_FLOW_ACTION_TYPE_SET_TAG and RTE_FLOW_ITEM_TYPE_TAG
present data in host-endian format, as do all metadata related
entities. The internal mlx5 tag related action and item should use
the same endianness to conform.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 6 +++---
drivers/net/mlx5/mlx5_flow.h | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4b08f4..5408797 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2707,7 +2707,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
actions_rx++;
set_tag = (void *)actions_rx;
set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
- set_tag->data = rte_cpu_to_be_32(*flow_id);
+ set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action));
@@ -2715,8 +2715,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item = pattern_tx;
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
- tag_item->data = rte_cpu_to_be_32(*flow_id);
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, &error);
+ tag_item->data = *flow_id;
+ tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7559810..8cc6c47 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -56,13 +56,13 @@ enum mlx5_rte_flow_action_type {
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Matches on source queue. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 02/20] net/mlx5: update modify header action translator
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 01/20] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 03/20] net/mlx5: add metadata register copy Viacheslav Ovsiienko
` (20 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
When composing the device command for the modify header action, the
provided mask should be taken into account more accurately: the
length and offset in the action should be set at precise bit-wise
boundaries.
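To illustrate the idea (sketch only, not part of the patch): for a
mask like 0x00ffff00 the translator now deduces offset 8 and length
16 instead of writing the whole 32-bit field:

#include <stdint.h>
#include <limits.h>     /* CHAR_BIT */
#include <rte_common.h> /* rte_bsf32() */

static void
deduce_bits(uint32_t mask, unsigned int *off_b, unsigned int *size_b)
{
	/* Position of the lowest set bit: start of the modified area. */
	*off_b = rte_bsf32(mask);
	/* Significant bits between the lowest and the highest set bit. */
	*size_b = sizeof(uint32_t) * CHAR_BIT - *off_b - __builtin_clz(mask);
	/* 0x00ffff00 -> off_b = 8, size_b = 16 (mask must be non-zero). */
}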
For future use, the metadata register copy action is also added.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 148 ++++++++++++++++++++++++++++++----------
drivers/net/mlx5/mlx5_prm.h | 18 +++--
2 files changed, 126 insertions(+), 40 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 42c265f..3df2609 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -240,12 +240,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Fetch 1, 2, 3 or 4 byte field from the byte array
+ * and return as unsigned integer in host-endian format.
+ *
+ * @param[in] data
+ * Pointer to data array.
+ * @param[in] size
+ * Size of field to extract.
+ *
+ * @return
+ * converted field in host endian format.
+ */
+static inline uint32_t
+flow_dv_fetch_field(const uint8_t *data, uint32_t size)
+{
+ uint32_t ret;
+
+ switch (size) {
+ case 1:
+ ret = *data;
+ break;
+ case 2:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ break;
+ case 3:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ ret = (ret << 8) | *(data + sizeof(uint16_t));
+ break;
+ case 4:
+ ret = rte_be_to_cpu_32(*(const unaligned_uint32_t *)data);
+ break;
+ default:
+ assert(false);
+ ret = 0;
+ break;
+ }
+ return ret;
+}
+
+/**
* Convert modify-header action to DV specification.
*
+ * Data length of each action is determined by provided field description
+ * and the item mask. Data bit offset and width of each action is determined
+ * by provided item mask.
+ *
* @param[in] item
* Pointer to item specification.
* @param[in] field
* Pointer to field modification information.
+ * For MLX5_MODIFICATION_TYPE_SET specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_ADD specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_COPY specifies source field.
+ * @param[in] dcopy
+ * Destination field info for MLX5_MODIFICATION_TYPE_COPY in @type.
+ * Negative offset value sets the same offset as source offset.
+ * size field is ignored, value is taken from source field.
* @param[in,out] resource
* Pointer to the modify-header resource.
* @param[in] type
@@ -259,38 +309,66 @@ struct field_modify_info modify_tcp[] = {
static int
flow_dv_convert_modify_action(struct rte_flow_item *item,
struct field_modify_info *field,
+ struct field_modify_info *dcopy,
struct mlx5_flow_dv_modify_hdr_resource *resource,
- uint32_t type,
- struct rte_flow_error *error)
+ uint32_t type, struct rte_flow_error *error)
{
uint32_t i = resource->actions_num;
struct mlx5_modification_cmd *actions = resource->actions;
- const uint8_t *spec = item->spec;
- const uint8_t *mask = item->mask;
- uint32_t set;
-
- while (field->size) {
- set = 0;
- /* Generate modify command for each mask segment. */
- memcpy(&set, &mask[field->offset], field->size);
- if (set) {
- if (i >= MLX5_MODIFY_NUM)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "too many items to modify");
- actions[i].action_type = type;
- actions[i].field = field->id;
- actions[i].length = field->size ==
- 4 ? 0 : field->size * 8;
- rte_memcpy(&actions[i].data[4 - field->size],
- &spec[field->offset], field->size);
- actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- ++i;
+
+ /*
+ * The item and mask are provided in big-endian format.
+ * The fields should be presented as in big-endian format either.
+ * Mask must be always present, it defines the actual field width.
+ */
+ assert(item->mask);
+ assert(field->size);
+ do {
+ unsigned int size_b;
+ unsigned int off_b;
+ uint32_t mask;
+ uint32_t data;
+
+ if (i >= MLX5_MODIFY_NUM)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many items to modify");
+ /* Fetch variable byte size mask from the array. */
+ mask = flow_dv_fetch_field((const uint8_t *)item->mask +
+ field->offset, field->size);
+ if (!mask) {
+ ++field;
+ continue;
+ }
+ /* Deduce actual data width in bits from mask value. */
+ off_b = rte_bsf32(mask);
+ size_b = sizeof(uint32_t) * CHAR_BIT -
+ off_b - __builtin_clz(mask);
+ assert(size_b);
+ size_b = size_b == sizeof(uint32_t) * CHAR_BIT ? 0 : size_b;
+ actions[i].action_type = type;
+ actions[i].field = field->id;
+ actions[i].offset = off_b;
+ actions[i].length = size_b;
+ /* Convert entire record to expected big-endian format. */
+ actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
+ if (type == MLX5_MODIFICATION_TYPE_COPY) {
+ assert(dcopy);
+ actions[i].dst_field = dcopy->id;
+ actions[i].dst_offset =
+ (int)dcopy->offset < 0 ? off_b : dcopy->offset;
+ /* Convert entire record to big-endian format. */
+ actions[i].data1 = rte_cpu_to_be_32(actions[i].data1);
+ } else {
+ assert(item->spec);
+ data = flow_dv_fetch_field((const uint8_t *)item->spec +
+ field->offset, field->size);
+ /* Shift out the trailing masked bits from data. */
+ data = (data & mask) >> off_b;
+ actions[i].data1 = rte_cpu_to_be_32(data);
}
- if (resource->actions_num != i)
- resource->actions_num = i;
- field++;
- }
+ ++i;
+ ++field;
+ } while (field->size);
+ resource->actions_num = i;
if (!resource->actions_num)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -334,7 +412,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv4;
item.mask = &ipv4_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv4, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv4, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -380,7 +458,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv6;
item.mask = &ipv6_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv6, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv6, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -426,7 +504,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &eth;
item.mask = &eth_mask;
- return flow_dv_convert_modify_action(&item, modify_eth, resource,
+ return flow_dv_convert_modify_action(&item, modify_eth, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -540,7 +618,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &tcp_mask;
field = modify_tcp;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -600,7 +678,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -657,7 +735,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -702,7 +780,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -747,7 +825,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 96b9166..b9e53f5 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -383,11 +383,12 @@ struct mlx5_cqe {
/* CQE format value. */
#define MLX5_COMPRESSED 0x3
-/* Write a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_SET 1
-
-/* Add a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_ADD 2
+/* Action type of header modification. */
+enum {
+ MLX5_MODIFICATION_TYPE_SET = 0x1,
+ MLX5_MODIFICATION_TYPE_ADD = 0x2,
+ MLX5_MODIFICATION_TYPE_COPY = 0x3,
+};
/* The field of packet to be modified. */
enum mlx5_modification_field {
@@ -470,6 +471,13 @@ struct mlx5_modification_cmd {
union {
uint32_t data1;
uint8_t data[4];
+ struct {
+ unsigned int rsvd2:8;
+ unsigned int dst_offset:5;
+ unsigned int rsvd3:3;
+ unsigned int dst_field:12;
+ unsigned int rsvd4:4;
+ };
};
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 03/20] net/mlx5: add metadata register copy
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 01/20] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 02/20] net/mlx5: update modify header action translator Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 04/20] net/mlx5: refactor flow structure Viacheslav Ovsiienko
` (19 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add a flow metadata register copy action, which is supported through
the modify header command. As it is an internal action not exposed to
users, the action type (MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG) is a
negative value. This can be used when creating PMD internal subflows.
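For clarity, a rough sketch of how a PMD internal subflow could carry
this negative-typed action (the REG_B/REG_C_1 pair is only an
example; the snippet relies on the driver-internal mlx5_flow.h and
mlx5_prm.h definitions, and the real register selection is done by
the query code added later in the series):

/* The negative action type is cast to the public enum, the same way
 * the other MLX5_RTE_FLOW_ACTION_TYPE_* internal actions are used. */
static struct mlx5_flow_action_copy_mreg cp_mreg = {
	.dst = REG_B,
	.src = REG_C_1,
};
static struct rte_flow_action copy_action[] = {
	{
		.type = (enum rte_flow_action_type)
			MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
		.conf = &cp_mreg,
	},
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};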
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 13 +++++++----
drivers/net/mlx5/mlx5_flow_dv.c | 50 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8cc6c47..170192d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -47,24 +47,30 @@ enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
};
-/* Private rte flow actions. */
+/* Private (internal) rte flow actions. */
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
+struct mlx5_flow_action_copy_mreg {
+ enum modify_reg dst;
+ enum modify_reg src;
+};
+
/* Matches on source queue. */
struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
@@ -227,7 +233,6 @@ struct mlx5_rte_flow_item_tx_queue {
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
-
#ifndef IPPROTO_MPLS
#define IPPROTO_MPLS 137
#endif
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3df2609..930f088 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -861,7 +861,7 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
- const struct mlx5_rte_flow_action_set_tag *conf = (action->conf);
+ const struct mlx5_rte_flow_action_set_tag *conf = action->conf;
struct mlx5_modification_cmd *actions = resource->actions;
uint32_t i = resource->actions_num;
@@ -883,6 +883,47 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert internal COPY_REG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] res
+ * Pointer to the modify-header resource.
+ * @param[in] action
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+ struct mlx5_flow_dv_modify_hdr_resource *res,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_action_copy_mreg *conf = action->conf;
+ uint32_t mask = RTE_BE32(UINT32_MAX);
+ struct rte_flow_item item = {
+ .spec = NULL,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_src[] = {
+ {4, 0, reg_to_field[conf->src]},
+ {0, 0, 0},
+ };
+ struct field_modify_info reg_dst = {
+ .offset = (uint32_t)-1, /* Same as src. */
+ .id = reg_to_field[conf->dst],
+ };
+ return flow_dv_convert_modify_action(&item,
+ reg_src, &reg_dst, res,
+ MLX5_MODIFICATION_TYPE_COPY,
+ error);
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -3949,6 +3990,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -5945,6 +5987,12 @@ struct field_modify_info modify_tcp[] = {
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
+ if (flow_dv_convert_action_copy_mreg(dev, &res,
+ actions, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 04/20] net/mlx5: refactor flow structure
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (2 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 03/20] net/mlx5: add metadata register copy Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 05/20] net/mlx5: update flow functions Viacheslav Ovsiienko
` (18 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Some rte_flow fields which are local to subflows have been moved to
the mlx5_flow structure. RSS attributes are grouped in the new
mlx5_flow_rss structure, and tag_resource is moved to the
mlx5_flow_dv structure.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 18 +++++---
drivers/net/mlx5/mlx5_flow.h | 25 ++++++-----
drivers/net/mlx5/mlx5_flow_dv.c | 89 ++++++++++++++++++++------------------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 ++++++++++++-----------
4 files changed, 105 insertions(+), 82 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5408797..d1661f2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -612,7 +612,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
unsigned int i;
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -676,7 +676,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
assert(dev->data->dev_started);
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -2815,13 +2815,20 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
goto error_before_flow;
}
flow->drv_type = flow_get_drv_type(dev, attr);
- flow->ingress = attr->ingress;
- flow->transfer = attr->transfer;
if (hairpin_id != 0)
flow->hairpin_flow_id = hairpin_id;
assert(flow->drv_type > MLX5_FLOW_TYPE_MIN &&
flow->drv_type < MLX5_FLOW_TYPE_MAX);
- flow->queue = (void *)(flow + 1);
+ flow->rss.queue = (void *)(flow + 1);
+ if (rss) {
+ /*
+ * The following information is required by
+ * mlx5_flow_hashfields_adjust() in advance.
+ */
+ flow->rss.level = rss->level;
+ /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
+ flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
+ }
LIST_INIT(&flow->dev_flows);
if (rss && rss->types) {
unsigned int graph_root;
@@ -2861,6 +2868,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (!dev_flow)
goto error;
dev_flow->flow = flow;
+ dev_flow->external = 0;
LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
ret = flow_drv_translate(dev, dev_flow, &attr_tx,
items_tx.items,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 170192d..b9a9507 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -417,7 +417,6 @@ struct mlx5_flow_dv_push_vlan_action_resource {
/* DV flows structure. */
struct mlx5_flow_dv {
- uint64_t hash_fields; /**< Fields that participate in the hash. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queues. */
/* Flow DV api: */
struct mlx5_flow_dv_matcher *matcher; /**< Cache to matcher. */
@@ -436,6 +435,8 @@ struct mlx5_flow_dv {
/**< Structure for VF VLAN workaround. */
struct mlx5_flow_dv_push_vlan_action_resource *push_vlan_res;
/**< Pointer to push VLAN action resource in cache. */
+ struct mlx5_flow_dv_tag_resource *tag_resource;
+ /**< pointer to the tag action. */
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
void *actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS];
/**< Action list. */
@@ -460,11 +461,18 @@ struct mlx5_flow_verbs {
};
struct ibv_flow *flow; /**< Verbs flow pointer. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queue object. */
- uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
struct mlx5_vf_vlan vf_vlan;
/**< Structure for VF VLAN workaround. */
};
+struct mlx5_flow_rss {
+ uint32_t level;
+ uint32_t queue_num; /**< Number of entries in @p queue. */
+ uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
+ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+};
+
/** Device flow structure. */
struct mlx5_flow {
LIST_ENTRY(mlx5_flow) next;
@@ -473,6 +481,10 @@ struct mlx5_flow {
/**< Bit-fields of present layers, see MLX5_FLOW_LAYER_*. */
uint64_t actions;
/**< Bit-fields of detected actions, see MLX5_FLOW_ACTION_*. */
+ uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
+ uint8_t ingress; /**< 1 if the flow is ingress. */
+ uint32_t group; /**< The group index. */
+ uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
union {
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
struct mlx5_flow_dv dv;
@@ -486,18 +498,11 @@ struct mlx5_flow {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
+ struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
- struct mlx5_flow_dv_tag_resource *tag_resource;
- /**< pointer to the tag action. */
- struct rte_flow_action_rss rss;/**< RSS context. */
- uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
- uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
- uint8_t ingress; /**< 1 if the flow is ingress. */
- uint32_t group; /**< The group index. */
- uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 930f088..019c9b3 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1583,10 +1583,9 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
struct mlx5_flow_dv_encap_decap_resource *cache_resource;
- struct rte_flow *flow = dev_flow->flow;
struct mlx5dv_dr_domain *domain;
- resource->flags = flow->group ? 0 : 1;
+ resource->flags = dev_flow->group ? 0 : 1;
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
domain = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX)
@@ -2745,7 +2744,7 @@ struct field_modify_info modify_tcp[] = {
else
ns = sh->rx_domain;
resource->flags =
- dev_flow->flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
+ dev_flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
/* Lookup a matching resource from cache. */
LIST_FOREACH(cache_resource, &sh->modify_cmds, next) {
if (resource->ft_type == cache_resource->ft_type &&
@@ -4066,18 +4065,20 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action actions[] __rte_unused,
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow);
+ struct mlx5_flow *dev_flow;
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
- return flow;
+ dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
#ifndef NDEBUG
@@ -5458,7 +5459,7 @@ struct field_modify_info modify_tcp[] = {
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
rte_atomic32_inc(&cache_resource->refcnt);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
return 0;
}
}
@@ -5480,7 +5481,7 @@ struct field_modify_info modify_tcp[] = {
rte_atomic32_init(&cache_resource->refcnt);
rte_atomic32_inc(&cache_resource->refcnt);
LIST_INSERT_HEAD(&sh->tags, cache_resource, next);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
DRV_LOG(DEBUG, "new tag resource %p: refcnt %d++",
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
@@ -5660,7 +5661,7 @@ struct field_modify_info modify_tcp[] = {
&table, error);
if (ret)
return ret;
- flow->group = table;
+ dev_flow->group = table;
if (attr->transfer)
res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
@@ -5697,47 +5698,50 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_FLAG:
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ assert(flow->rss.queue);
queue = actions->conf;
flow->rss.queue_num = 1;
- (*flow->queue)[0] = queue->index;
+ (*flow->rss.queue)[0] = queue->index;
action_flags |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
+ assert(flow->rss.queue);
rss = actions->conf;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance
+ * when expanding items for RSS.
+ */
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
@@ -5748,7 +5752,7 @@ struct field_modify_info modify_tcp[] = {
flow->counter = flow_dv_counter_alloc(dev,
count->shared,
count->id,
- flow->group);
+ dev_flow->group);
if (flow->counter == NULL)
goto cnt_err;
dev_flow->dv.actions[actions_n++] =
@@ -6046,9 +6050,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv4(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -6073,9 +6078,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv6(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -6100,7 +6106,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_tcp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
IBV_RX_HASH_SRC_PORT_TCP |
@@ -6112,7 +6118,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_udp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
IBV_RX_HASH_SRC_PORT_UDP |
@@ -6208,7 +6214,7 @@ struct field_modify_info modify_tcp[] = {
matcher.priority = mlx5_flow_adjust_priority(dev, priority,
matcher.priority);
matcher.egress = attr->egress;
- matcher.group = flow->group;
+ matcher.group = dev_flow->group;
matcher.transfer = attr->transfer;
if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
return -rte_errno;
@@ -6242,7 +6248,7 @@ struct field_modify_info modify_tcp[] = {
dv = &dev_flow->dv;
n = dv->actions_n;
if (dev_flow->actions & MLX5_FLOW_ACTION_DROP) {
- if (flow->transfer) {
+ if (dev_flow->transfer) {
dv->actions[n++] = priv->sh->esw_drop_action;
} else {
dv->hrxq = mlx5_hrxq_drop_new(dev);
@@ -6260,15 +6266,18 @@ struct field_modify_info modify_tcp[] = {
(MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS)) {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq) {
hrxq = mlx5_hrxq_new
- (dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields, (*flow->queue),
+ (dev, flow->rss.key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
@@ -6578,10 +6587,6 @@ struct field_modify_info modify_tcp[] = {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
}
- if (flow->tag_resource) {
- flow_dv_tag_release(dev, flow->tag_resource);
- flow->tag_resource = NULL;
- }
while (!LIST_EMPTY(&flow->dev_flows)) {
dev_flow = LIST_FIRST(&flow->dev_flows);
LIST_REMOVE(dev_flow, next);
@@ -6597,6 +6602,8 @@ struct field_modify_info modify_tcp[] = {
flow_dv_port_id_action_resource_release(dev_flow);
if (dev_flow->dv.push_vlan_res)
flow_dv_push_vlan_action_resource_release(dev_flow);
+ if (dev_flow->dv.tag_resource)
+ flow_dv_tag_release(dev, dev_flow->dv.tag_resource);
rte_free(dev_flow);
}
}
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fd27f6c..3ab73c2 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -864,8 +864,8 @@
const struct rte_flow_action_queue *queue = action->conf;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- (*flow->queue)[0] = queue->index;
+ if (flow->rss.queue)
+ (*flow->rss.queue)[0] = queue->index;
flow->rss.queue_num = 1;
}
@@ -889,16 +889,17 @@
const uint8_t *rss_key;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance when expanding
+ * items for RSS.
+ */
}
/**
@@ -1365,22 +1366,23 @@
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
+ struct mlx5_flow *dev_flow;
size += flow_verbs_get_actions_size(actions);
size += flow_verbs_get_items_size(items);
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->verbs.attr = (void *)(flow + 1);
- flow->verbs.specs =
- (uint8_t *)(flow + 1) + sizeof(struct ibv_flow_attr);
- return flow;
+ dev_flow->verbs.attr = (void *)(dev_flow + 1);
+ dev_flow->verbs.specs = (void *)(dev_flow->verbs.attr + 1);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
/**
@@ -1486,7 +1488,7 @@
flow_verbs_translate_item_ipv4(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -1498,7 +1500,7 @@
flow_verbs_translate_item_ipv6(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -1510,7 +1512,7 @@
flow_verbs_translate_item_tcp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
@@ -1522,7 +1524,7 @@
flow_verbs_translate_item_udp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
@@ -1667,16 +1669,17 @@
} else {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq)
- hrxq = mlx5_hrxq_new(dev, flow->key,
+ hrxq = mlx5_hrxq_new(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 05/20] net/mlx5: update flow functions
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (3 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 04/20] net/mlx5: refactor flow structure Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 06/20] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
` (17 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Update the flow create/destroy functions for future reuse: list
operations can now be skipped inside the functions and performed
separately, outside of flow creation.
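The idea in a minimal, self-contained sketch (names simplified here,
not the exact mlx5 prototypes):

#include <stddef.h>
#include <sys/queue.h>

struct flow {
	TAILQ_ENTRY(flow) next;
	/* ... flow payload ... */
};
TAILQ_HEAD(flow_list, flow);

/* Insert only when the caller supplied a list; NULL means the caller
 * keeps track of the flow on its own. */
static void
flow_register(struct flow_list *list, struct flow *flow)
{
	if (list)
		TAILQ_INSERT_TAIL(list, flow, next);
}

/* Symmetric removal: nothing to do when no list was used. */
static void
flow_unregister(struct flow_list *list, struct flow *flow)
{
	if (list)
		TAILQ_REMOVE(list, flow, next);
}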
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d1661f2..6e6c845 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2736,7 +2736,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter is NULL,
+ * no list insertion occurs, the flow is just created and
+ * it is the caller's responsibility to track the
+ * created flow.
* @param[in] attr
* Flow rule attributes.
* @param[in] items
@@ -2881,7 +2884,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
- TAILQ_INSERT_TAIL(list, flow, next);
+ if (list)
+ TAILQ_INSERT_TAIL(list, flow, next);
flow_rxq_flags_set(dev, flow);
return flow;
error_before_flow:
@@ -2975,7 +2979,8 @@ struct rte_flow *
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter is NULL,
+ * the flow is not removed from the list.
* @param[in] flow
* Flow to destroy.
*/
@@ -2995,7 +3000,8 @@ struct rte_flow *
mlx5_flow_id_release(priv->sh->flow_id_pool,
flow->hairpin_flow_id);
flow_drv_destroy(dev, flow);
- TAILQ_REMOVE(list, flow, next);
+ if (list)
+ TAILQ_REMOVE(list, flow, next);
rte_free(flow->fdir);
rte_free(flow);
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 06/20] net/mlx5: update meta register matcher set
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (4 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 05/20] net/mlx5: update flow functions Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 07/20] net/mlx5: rename structure and function Viacheslav Ovsiienko
` (16 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Introduce a dedicated routine to set up the metadata register field
in the matcher and update the code to use this unified helper.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 171 +++++++++++++++++++---------------------
1 file changed, 82 insertions(+), 89 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 019c9b3..eb7e481 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4903,6 +4903,78 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add metadata register item to matcher
+ *
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] reg_type
+ * Type of device metadata register
+ * @param[in] value
+ * Register value
+ * @param[in] mask
+ * Register mask
+ */
+static void
+flow_dv_match_meta_reg(void *matcher, void *key,
+ enum modify_reg reg_type,
+ uint32_t data, uint32_t mask)
+{
+ void *misc2_m =
+ MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
+ void *misc2_v =
+ MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
+
+ data &= mask;
+ switch (reg_type) {
+ case REG_A:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data);
+ break;
+ case REG_B:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data);
+ break;
+ case REG_C_0:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, data);
+ break;
+ case REG_C_1:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data);
+ break;
+ case REG_C_2:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data);
+ break;
+ case REG_C_3:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data);
+ break;
+ case REG_C_4:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data);
+ break;
+ case REG_C_5:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data);
+ break;
+ case REG_C_6:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data);
+ break;
+ case REG_C_7:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data);
+ break;
+ default:
+ assert(false);
+ break;
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -4920,21 +4992,15 @@ struct field_modify_info modify_tcp[] = {
{
const struct rte_flow_item_meta *meta_m;
const struct rte_flow_item_meta *meta_v;
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
meta_m = (const void *)item->mask;
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v) {
- MLX5_SET(fte_match_set_misc2, misc2_m,
- metadata_reg_a, meta_m->data);
- MLX5_SET(fte_match_set_misc2, misc2_v,
- metadata_reg_a, meta_v->data & meta_m->data);
- }
+ if (meta_v)
+ flow_dv_match_meta_reg(matcher, key, REG_A,
+ rte_cpu_to_be_32(meta_v->data),
+ rte_cpu_to_be_32(meta_m->data));
}
/**
@@ -4951,13 +5017,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_meta_vport(void *matcher, void *key,
uint32_t value, uint32_t mask)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
-
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, value);
+ flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask);
}
/**
@@ -4971,81 +5031,14 @@ struct field_modify_info modify_tcp[] = {
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_tag(void *matcher, void *key,
- const struct rte_flow_item *item)
+flow_dv_translate_mlx5_item_tag(void *matcher, void *key,
+ const struct rte_flow_item *item)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
enum modify_reg reg = tag_v->id;
- rte_be32_t value = tag_v->data;
- rte_be32_t mask = tag_m->data;
- switch (reg) {
- case REG_A:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a,
- rte_be_to_cpu_32(value));
- break;
- case REG_B:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_0:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_1:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_2:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_3:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_4:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_5:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_6:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_7:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7,
- rte_be_to_cpu_32(value));
- break;
- }
+ flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
/**
@@ -6177,8 +6170,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
- flow_dv_translate_item_tag(match_mask, match_value,
- items);
+ flow_dv_translate_mlx5_item_tag(match_mask,
+ match_value, items);
last_item = MLX5_FLOW_ITEM_TAG;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 07/20] net/mlx5: rename structure and function
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (5 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 06/20] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 08/20] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
` (15 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The following renames are applied:
- in the DV flow engine overall: flow_d_* -> flow_dv_*
- in flow_dv_translate(): res -> mhdr_res
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 151 ++++++++++++++++++++--------------------
1 file changed, 76 insertions(+), 75 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index eb7e481..853e879 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -183,7 +183,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to the rte_eth_dev structure.
*/
static void
-flow_d_shared_lock(struct rte_eth_dev *dev)
+flow_dv_shared_lock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -198,7 +198,7 @@ struct field_modify_info modify_tcp[] = {
}
static void
-flow_d_shared_unlock(struct rte_eth_dev *dev)
+flow_dv_shared_unlock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -5597,7 +5597,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Fill the flow with DV spec.
+ * Fill the flow with DV spec, lock free
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to rte_eth_dev structure.
@@ -5616,12 +5617,12 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+__flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow *flow = dev_flow->flow;
@@ -5636,7 +5637,7 @@ struct field_modify_info modify_tcp[] = {
};
int actions_n = 0;
bool actions_end = false;
- struct mlx5_flow_dv_modify_hdr_resource res = {
+ struct mlx5_flow_dv_modify_hdr_resource mhdr_res = {
.ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX
};
@@ -5656,7 +5657,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
dev_flow->group = table;
if (attr->transfer)
- res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
+ mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; !actions_end ; actions++) {
@@ -5805,7 +5806,7 @@ struct field_modify_info modify_tcp[] = {
mlx5_update_vlan_vid_pcp(actions, &vlan);
/* If no VLAN push - this is a modify header action */
if (flow_dv_convert_action_modify_vlan_vid
- (&res, actions, error))
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
break;
@@ -5904,8 +5905,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
- if (flow_dv_convert_action_modify_mac(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_mac
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ?
@@ -5914,8 +5915,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
- if (flow_dv_convert_action_modify_ipv4(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv4
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ?
@@ -5924,8 +5925,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
- if (flow_dv_convert_action_modify_ipv6(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv6
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ?
@@ -5934,9 +5935,9 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
- if (flow_dv_convert_action_modify_tp(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_tp
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_TP_SRC ?
@@ -5944,23 +5945,22 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_SET_TP_DST;
break;
case RTE_FLOW_ACTION_TYPE_DEC_TTL:
- if (flow_dv_convert_action_modify_dec_ttl(&res, items,
- &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_dec_ttl
+ (&mhdr_res, items, &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_DEC_TTL;
break;
case RTE_FLOW_ACTION_TYPE_SET_TTL:
- if (flow_dv_convert_action_modify_ttl(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_ttl
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TTL;
break;
case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ:
- if (flow_dv_convert_action_modify_tcp_seq(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_seq
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ?
@@ -5970,8 +5970,8 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK:
- if (flow_dv_convert_action_modify_tcp_ack(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_ack
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ?
@@ -5979,14 +5979,14 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
- if (flow_dv_convert_action_set_reg(&res, actions,
- error))
+ if (flow_dv_convert_action_set_reg
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
- if (flow_dv_convert_action_copy_mreg(dev, &res,
- actions, error))
+ if (flow_dv_convert_action_copy_mreg
+ (dev, &mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
@@ -5995,9 +5995,7 @@ struct field_modify_info modify_tcp[] = {
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
- (dev, &res,
- dev_flow,
- error))
+ (dev, &mhdr_res, dev_flow, error))
return -rte_errno;
dev_flow->dv.actions[modify_action_position] =
dev_flow->dv.modify_hdr->verbs_action;
@@ -6215,7 +6213,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Apply the flow to the NIC.
+ * Apply the flow to the NIC, lock free,
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6228,8 +6227,8 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
- struct rte_flow_error *error)
+__flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6527,6 +6526,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC but keeps it in memory.
+ * Lock free, (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to Ethernet device.
@@ -6534,7 +6534,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6562,6 +6562,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6569,13 +6570,13 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow *dev_flow;
if (!flow)
return;
- flow_dv_remove(dev, flow);
+ __flow_dv_remove(dev, flow);
if (flow->counter) {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
@@ -6686,69 +6687,69 @@ struct field_modify_info modify_tcp[] = {
}
/*
- * Mutex-protected thunk to flow_dv_translate().
+ * Mutex-protected thunk to lock-free __flow_dv_translate().
*/
static int
-flow_d_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_translate(dev, dev_flow, attr, items, actions, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_translate(dev, dev_flow, attr, items, actions, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_apply().
+ * Mutex-protected thunk to lock-free __flow_dv_apply().
*/
static int
-flow_d_apply(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
+flow_dv_apply(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_apply(dev, flow, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_apply(dev, flow, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_remove().
+ * Mutex-protected thunk to lock-free __flow_dv_remove().
*/
static void
-flow_d_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_remove(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_remove(dev, flow);
+ flow_dv_shared_unlock(dev);
}
/*
- * Mutex-protected thunk to flow_dv_destroy().
+ * Mutex-protected thunk to lock-free __flow_dv_destroy().
*/
static void
-flow_d_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_destroy(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_destroy(dev, flow);
+ flow_dv_shared_unlock(dev);
}
const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.validate = flow_dv_validate,
.prepare = flow_dv_prepare,
- .translate = flow_d_translate,
- .apply = flow_d_apply,
- .remove = flow_d_remove,
- .destroy = flow_d_destroy,
+ .translate = flow_dv_translate,
+ .apply = flow_dv_apply,
+ .remove = flow_dv_remove,
+ .destroy = flow_dv_destroy,
.query = flow_dv_query,
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 08/20] net/mlx5: check metadata registers availability
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (6 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 07/20] net/mlx5: rename structure and function Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 09/20] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
` (14 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The metadata registers reg_c provide support for TAG and
SET_TAG features. Although 8 registers are available on
the current mlx5 devices, some of them can be reserved.
The availability should be queried by the iterative
trial-and-error procedure implemented in the
mlx5_flow_discover_mreg_c() routine.
If reg_c is available, extensive metadata support can be
assumed, e.g. the metadata register copy action, support
for 16 modify header actions (instead of 8 by default),
preserving registers across different domains (FDB and NIC),
and so on.
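As a rough illustration (a sketch, not part of the patch), other code in
the PMD can consult the discovered table to decide whether the extensive
metadata features may be engaged; flow_mreg_c[] and REG_NONE are the
names introduced by the diff below:
/* Sketch: reg_c[0]/reg_c[1] are always reserved, so an available
 * reg_c[2] implies the extensive metadata mode can work. */
static bool
example_ext_mreg_usable(const struct mlx5_dev_config *config)
{
	return config->flow_mreg_c[2] != REG_NONE;
}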
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 11 +++++
drivers/net/mlx5/mlx5.h | 11 ++++-
drivers/net/mlx5/mlx5_ethdev.c | 8 +++-
drivers/net/mlx5/mlx5_flow.c | 98 +++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow.h | 13 ------
drivers/net/mlx5/mlx5_flow_dv.c | 9 ++--
drivers/net/mlx5/mlx5_prm.h | 18 ++++++++
7 files changed, 148 insertions(+), 20 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 72c30bf..1b86b7b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2341,6 +2341,17 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
+ if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
+ DRV_LOG(DEBUG,
+ "port %u extensive metadata register is not supported",
+ eth_dev->data->port_id);
+ }
return eth_dev;
error:
if (priv) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..6b82c6d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -37,6 +37,7 @@
#include "mlx5_autoconf.h"
#include "mlx5_defs.h"
#include "mlx5_glue.h"
+#include "mlx5_prm.h"
enum {
PCI_VENDOR_ID_MELLANOX = 0x15b3,
@@ -252,6 +253,8 @@ struct mlx5_dev_config {
} mprq; /* Configurations for Multi-Packet RQ. */
int mps; /* Multi-packet send supported mode. */
unsigned int flow_prio; /* Number of flow priorities. */
+ enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+ /* Availability of mreg_c's. */
unsigned int tso_max_payload_sz; /* Maximum TCP payload for TSO. */
unsigned int ind_table_max_size; /* Maximum indirection table size. */
unsigned int max_dump_files_num; /* Maximum dump files per queue. */
@@ -561,6 +564,10 @@ struct mlx5_flow_tbl_resource {
#define MLX5_MAX_TABLES UINT16_MAX
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
+/* Reserve the last two tables for metadata register copy. */
+#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
+ (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -786,7 +793,7 @@ int mlx5_dev_to_pci_addr(const char *dev_path,
int mlx5_is_removed(struct rte_eth_dev *dev);
eth_tx_burst_t mlx5_select_tx_function(struct rte_eth_dev *dev);
eth_rx_burst_t mlx5_select_rx_function(struct rte_eth_dev *dev);
-struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port);
+struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port, bool valid);
struct mlx5_priv *mlx5_dev_to_eswitch_info(struct rte_eth_dev *dev);
int mlx5_sysfs_switch_info(unsigned int ifindex,
struct mlx5_switch_info *info);
@@ -866,6 +873,8 @@ int mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
/* mlx5_flow.c */
+int mlx5_flow_discover_mreg_c(struct rte_eth_dev *eth_dev);
+bool mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev);
int mlx5_flow_discover_priorities(struct rte_eth_dev *dev);
void mlx5_flow_print(struct rte_flow *flow);
int mlx5_flow_validate(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..2b7c867 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1793,6 +1793,10 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
*
* @param[in] port
* Device port id.
+ * @param[in] valid
+ * Device port id is valid, skip check. This flag is useful
+ * when trials are performed from probing and device is not
+ * flagged as valid yet (in attaching process).
* @param[out] es_domain_id
* E-Switch domain id.
* @param[out] es_port_id
@@ -1803,7 +1807,7 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
* on success, NULL otherwise and rte_errno is set.
*/
struct mlx5_priv *
-mlx5_port_to_eswitch_info(uint16_t port)
+mlx5_port_to_eswitch_info(uint16_t port, bool valid)
{
struct rte_eth_dev *dev;
struct mlx5_priv *priv;
@@ -1812,7 +1816,7 @@ struct mlx5_priv *
rte_errno = EINVAL;
return NULL;
}
- if (!rte_eth_dev_is_valid_port(port)) {
+ if (!valid && !rte_eth_dev_is_valid_port(port)) {
rte_errno = ENODEV;
return NULL;
}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6e6c845..f32ea8d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -368,6 +368,33 @@ static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
NULL, "invalid feature name");
}
+
+/**
+ * Check extensive flow metadata register support.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * True if device supports extensive flow metadata register, otherwise false.
+ */
+bool
+mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
+ /*
+ * Having available reg_c can be regarded inclusively as supporting
+ * extensive flow metadata register, which could mean,
+ * - metadata register copy action by modify header.
+ * - 16 modify header actions is supported.
+ * - reg_c's are preserved across different domain (FDB and NIC) on
+ * packet loopback by flow lookup miss.
+ */
+ return config->flow_mreg_c[2] != REG_NONE;
+}
+
/**
* Discover the maximum number of priority available.
*
@@ -4033,3 +4060,74 @@ struct rte_flow *
}
return 0;
}
+
+/**
+ * Discover availability of metadata reg_c's.
+ *
+ * Iteratively use test flows to check availability.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ enum modify_reg idx;
+ int n = 0;
+
+ /* reg_c[0] and reg_c[1] are reserved. */
+ config->flow_mreg_c[n++] = REG_C_0;
+ config->flow_mreg_c[n++] = REG_C_1;
+ /* Discover availability of other reg_c's. */
+ for (idx = REG_C_2; idx <= REG_C_7; ++idx) {
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .priority = MLX5_FLOW_PRIO_RSVD,
+ .ingress = 1,
+ };
+ struct rte_flow_item items[] = {
+ [0] = {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action actions[] = {
+ [0] = {
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &(struct mlx5_flow_action_copy_mreg){
+ .src = REG_C_1,
+ .dst = idx,
+ },
+ },
+ [1] = {
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &(struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ },
+ },
+ [2] = {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ struct rte_flow *flow;
+ struct rte_flow_error error;
+
+ if (!config->dv_flow_en)
+ break;
+ /* Create internal flow, validation skips copy action. */
+ flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, &error);
+ if (!flow)
+ continue;
+ if (dev->data->dev_started || !flow_drv_apply(dev, flow, NULL))
+ config->flow_mreg_c[n++] = idx;
+ flow_list_destroy(dev, NULL, flow);
+ }
+ for (; n < MLX5_MREG_C_NUM; ++n)
+ config->flow_mreg_c[n] = REG_NONE;
+ return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b9a9507..f2b6726 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -27,19 +27,6 @@
#include "mlx5.h"
#include "mlx5_prm.h"
-enum modify_reg {
- REG_A,
- REG_B,
- REG_C_0,
- REG_C_1,
- REG_C_2,
- REG_C_3,
- REG_C_4,
- REG_C_5,
- REG_C_6,
- REG_C_7,
-};
-
/* Private rte flow items. */
enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 853e879..8b93022 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -830,6 +830,7 @@ struct field_modify_info modify_tcp[] = {
}
static enum mlx5_modification_field reg_to_field[] = {
+ [REG_NONE] = MLX5_MODI_OUT_NONE,
[REG_A] = MLX5_MODI_META_DATA_REG_A,
[REG_B] = MLX5_MODI_META_DATA_REG_B,
[REG_C_0] = MLX5_MODI_META_REG_C_0,
@@ -1038,7 +1039,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
if (!spec)
return 0;
- esw_priv = mlx5_port_to_eswitch_info(spec->id);
+ esw_priv = mlx5_port_to_eswitch_info(spec->id, false);
if (!esw_priv)
return rte_flow_error_set(error, rte_errno,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, spec,
@@ -2695,7 +2696,7 @@ struct field_modify_info modify_tcp[] = {
"failed to obtain E-Switch info");
port_id = action->conf;
port = port_id->original ? dev->data->port_id : port_id->id;
- act_priv = mlx5_port_to_eswitch_info(port);
+ act_priv = mlx5_port_to_eswitch_info(port, false);
if (!act_priv)
return rte_flow_error_set
(error, rte_errno,
@@ -5090,7 +5091,7 @@ struct field_modify_info modify_tcp[] = {
mask = pid_m ? pid_m->id : 0xffff;
id = pid_v ? pid_v->id : dev->data->port_id;
- priv = mlx5_port_to_eswitch_info(id);
+ priv = mlx5_port_to_eswitch_info(id, item == NULL);
if (!priv)
return -rte_errno;
/* Translate to vport field or to metadata, depending on mode. */
@@ -5538,7 +5539,7 @@ struct field_modify_info modify_tcp[] = {
(const struct rte_flow_action_port_id *)action->conf;
port = conf->original ? dev->data->port_id : conf->id;
- priv = mlx5_port_to_eswitch_info(port);
+ priv = mlx5_port_to_eswitch_info(port, false);
if (!priv)
return rte_flow_error_set(error, -rte_errno,
RTE_FLOW_ERROR_TYPE_ACTION,
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b9e53f5..c17ba66 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -392,6 +392,7 @@ enum {
/* The field of packet to be modified. */
enum mlx5_modification_field {
+ MLX5_MODI_OUT_NONE = -1,
MLX5_MODI_OUT_SMAC_47_16 = 1,
MLX5_MODI_OUT_SMAC_15_0,
MLX5_MODI_OUT_ETHERTYPE,
@@ -455,6 +456,23 @@ enum mlx5_modification_field {
MLX5_MODI_IN_TCP_ACK_NUM = 0x5C,
};
+/* Total number of metadata reg_c's. */
+#define MLX5_MREG_C_NUM (MLX5_MODI_META_REG_C_7 - MLX5_MODI_META_REG_C_0 + 1)
+
+enum modify_reg {
+ REG_NONE = 0,
+ REG_A,
+ REG_B,
+ REG_C_0,
+ REG_C_1,
+ REG_C_2,
+ REG_C_3,
+ REG_C_4,
+ REG_C_5,
+ REG_C_6,
+ REG_C_7,
+};
+
/* Modification sub command. */
struct mlx5_modification_cmd {
union {
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 09/20] net/mlx5: add devarg for extensive metadata support
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (7 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 08/20] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 10/20] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
` (13 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The PMD parameter dv_xmeta_en is added to control extensive
metadata support. A nonzero value enables extensive flow
metadata support if the device is capable and the driver
supports it. This can enable extensive support of the MARK
and META items of rte_flow. The newly introduced SET_TAG and
SET_META actions do not depend on the dv_xmeta_en parameter,
because there is no compatibility issue for the new entities.
The dv_xmeta_en is disabled by default.
There are some possible configurations, depending on parameter
value:
- 0, this is the default value; it defines the legacy mode, the MARK
and META related actions and items operate only within NIC Tx
and NIC Rx steering domains, no MARK and META information
crosses the domain boundaries. The MARK item is 24 bits wide,
the META item is 32 bits wide.
- 1, this engages extensive metadata mode, the MARK and META
related actions and items operate within all supported steering
domains, including FDB, MARK and META information may cross
the domain boundaries. The ``MARK`` item is 24 bits wide, the
META item width depends on kernel and firmware configurations
and might be 0, 16 or 32 bits. Within NIC Tx domain META data
width is 32 bits for compatibility, the actual width of data
transferred to the FDB domain depends on kernel configuration
and may vary. The actual supported width can be retrieved
at runtime by a series of rte_flow_validate() trials.
- 2, this engages extensive metadata mode, the MARK and META
related actions and items operate within all supported steering
domains, including FDB, MARK and META information may cross
the domain boundaries. The META item is 32 bits wide, the MARK
item width depends on kernel and firmware configurations and
might be 0, 16 or 24 bits. The actual supported width can be
retrieved at runtime by a series of rte_flow_validate() trials.
If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
ignored and the device is configured to operate in legacy mode (0).
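As an illustration only (not part of the patch), the parameter is passed
like any other mlx5 devarg on the EAL device list; the PCI address below
is a placeholder:
testpmd -w 82:00.0,dv_xmeta_en=1 -- -i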
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/mlx5.rst | 49 ++++++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.c | 33 +++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_defs.h | 4 ++++
drivers/net/mlx5/mlx5_prm.h | 3 +++
5 files changed, 90 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..8870969 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -578,6 +578,55 @@ Run-time configuration
Disabled by default.
+- ``dv_xmeta_en`` parameter [int]
+
+ A nonzero value enables extensive flow metadata support if the device
+ is capable and the driver supports it. This can enable extensive support
+ of ``MARK`` and ``META`` items of ``rte_flow``. The newly introduced
+ ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
+
+ There are some possible configurations, depending on parameter value:
+
+ - 0, this is the default value; it defines the legacy mode, the ``MARK`` and
+ ``META`` related actions and items operate only within NIC Tx and
+ NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
+ the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
+ item is 32 bits wide.
+
+ - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 32 bits. Within NIC Tx domain ``META`` data width is 32 bits for
+ compatibility, the actual width of data transferred to the FDB domain
+ depends on kernel configuration and may vary. The actual supported
+ width can be retrieved at runtime by a series of rte_flow_validate()
+ trials.
+
+ - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 24 bits. The actual supported width can be retrieved at runtime by
+ a series of rte_flow_validate() trials.
+
+ +------+-----------+-----------+-------------+-------------+
+ | Mode | ``MARK`` | ``META`` | ``META`` Tx | FDB/Through |
+ +======+===========+===========+=============+=============+
+ | 0 | 24 bits | 32 bits | 32 bits | no |
+ +------+-----------+-----------+-------------+-------------+
+ | 1 | 24 bits | vary 0-32 | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+ | 2 | vary 0-32 | 32 bits | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+
+ If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
+ ignored and the device is configured to operate in legacy mode (0).
+
+ Disabled by default (set to 0).
+
- ``dv_flow_en`` parameter [int]
A nonzero value enables the DV flow steering assuming it is supported
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1b86b7b..943d0e8 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -125,6 +125,9 @@
/* Activate DV flow steering. */
#define MLX5_DV_FLOW_EN "dv_flow_en"
+/* Enable extensive flow metadata support. */
+#define MLX5_DV_XMETA_EN "dv_xmeta_en"
+
/* Activate Netlink support in VF mode. */
#define MLX5_VF_NL_EN "vf_nl_en"
@@ -1310,6 +1313,16 @@ struct mlx5_flow_id_pool *
config->dv_esw_en = !!tmp;
} else if (strcmp(MLX5_DV_FLOW_EN, key) == 0) {
config->dv_flow_en = !!tmp;
+ } else if (strcmp(MLX5_DV_XMETA_EN, key) == 0) {
+ if (tmp != MLX5_XMETA_MODE_LEGACY &&
+ tmp != MLX5_XMETA_MODE_META16 &&
+ tmp != MLX5_XMETA_MODE_META32) {
+ DRV_LOG(WARNING, "invalid extensive "
+ "metadata parameter");
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ config->dv_xmeta_en = tmp;
} else if (strcmp(MLX5_MR_EXT_MEMSEG_EN, key) == 0) {
config->mr_ext_memseg_en = !!tmp;
} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
@@ -1361,6 +1374,7 @@ struct mlx5_flow_id_pool *
MLX5_VF_NL_EN,
MLX5_DV_ESW_EN,
MLX5_DV_FLOW_EN,
+ MLX5_DV_XMETA_EN,
MLX5_MR_EXT_MEMSEG_EN,
MLX5_REPRESENTOR,
MLX5_MAX_DUMP_FILES_NUM,
@@ -1734,6 +1748,12 @@ struct mlx5_flow_id_pool *
rte_errno = EINVAL;
return rte_errno;
}
+ if (sh_conf->dv_xmeta_en ^ config->dv_xmeta_en) {
+ DRV_LOG(ERR, "\"dv_xmeta_en\" configuration mismatch"
+ " for shared %s context", sh->ibdev_name);
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
return 0;
}
/**
@@ -2347,10 +2367,23 @@ struct mlx5_flow_id_pool *
err = -err;
goto error;
}
+ if (!priv->config.dv_esw_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(WARNING, "metadata mode %u is not supported "
+ "(no E-Switch)", priv->config.dv_xmeta_en);
+ priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
eth_dev->data->port_id);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata registers available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
}
return eth_dev;
error:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6b82c6d..e59f8f6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -238,6 +238,7 @@ struct mlx5_dev_config {
unsigned int vf_nl_en:1; /* Enable Netlink requests in VF mode. */
unsigned int dv_esw_en:1; /* Enable E-Switch DV flow. */
unsigned int dv_flow_en:1; /* Enable DV flow. */
+ unsigned int dv_xmeta_en:2; /* Enable extensive flow metadata. */
unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
unsigned int devx:1; /* Whether devx interface is available or not. */
unsigned int dest_tir:1; /* Whether advanced DR API is available. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index e36ab55..a77c430 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -141,6 +141,10 @@
/* Cache size of mempool for Multi-Packet RQ. */
#define MLX5_MPRQ_MP_CACHE_SZ 32U
+#define MLX5_XMETA_MODE_LEGACY 0
+#define MLX5_XMETA_MODE_META16 1
+#define MLX5_XMETA_MODE_META32 2
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index c17ba66..b405cb6 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -226,6 +226,9 @@
/* Default mark value used when none is provided. */
#define MLX5_FLOW_MARK_DEFAULT 0xffffff
+/* Default mark mask for metadata legacy mode. */
+#define MLX5_FLOW_MARK_MASK 0xffffff
+
/* Maximum number of DS in WQE. Limited by 6-bit field. */
#define MLX5_DSEG_MAX 63
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 10/20] net/mlx5: adjust shared register according to mask
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (8 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 09/20] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 11/20] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
` (12 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The metadata register reg_c[0] might be used by the kernel or
firmware for their internal purposes. The actually used mask
can be queried from the kernel. The remaining bits can be
used by the PMD to provide the META or MARK features. The code
queries the mask of reg_c[0] and adjusts the resource usage
dynamically.
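A worked example (the kernel-owned part of the mask is hypothetical): if
the kernel reserves the low 16 bits of reg_c[0], i.e. vport_meta_mask is
0x0000ffff, the masks computed by the new code become:
reg_c0 = ~vport_meta_mask                        = 0xffff0000
META16 mode: meta = reg_c0 >> rte_bsf32(reg_c0)  = 0x0000ffff (16-bit META)
             mark = MLX5_FLOW_MARK_MASK          = 0x00ffffff (24-bit MARK)
META32 mode: meta = UINT32_MAX                   = 0xffffffff (32-bit META)
             mark = (reg_c0 >> 16) & 0xffffff    = 0x0000ffff (16-bit MARK)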
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 95 +++++++++++++++++++++++++++++++++++------
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_flow_dv.c | 26 +++++++++--
3 files changed, 107 insertions(+), 17 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 943d0e8..fb7b94b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1584,6 +1584,60 @@ struct mlx5_flow_id_pool *
}
/**
+ * Configures the metadata mask fields in the shared context.
+ *
+ * @param [in] dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_set_metadata_mask(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ibv_shared *sh = priv->sh;
+ uint32_t meta, mark, reg_c0;
+
+ reg_c0 = ~priv->vport_meta_mask;
+ switch (priv->config.dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ meta = UINT32_MAX;
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META16:
+ meta = reg_c0 >> rte_bsf32(reg_c0);
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META32:
+ meta = UINT32_MAX;
+ mark = (reg_c0 >> rte_bsf32(reg_c0)) & MLX5_FLOW_MARK_MASK;
+ break;
+ default:
+ meta = 0;
+ mark = 0;
+ assert(false);
+ break;
+ }
+ if (sh->dv_mark_mask && sh->dv_mark_mask != mark)
+ DRV_LOG(WARNING, "metadata MARK mask mismatche %08X:%08X",
+ sh->dv_mark_mask, mark);
+ else
+ sh->dv_mark_mask = mark;
+ if (sh->dv_meta_mask && sh->dv_meta_mask != meta)
+ DRV_LOG(WARNING, "metadata META mask mismatche %08X:%08X",
+ sh->dv_meta_mask, meta);
+ else
+ sh->dv_meta_mask = meta;
+ if (sh->dv_regc0_mask && sh->dv_regc0_mask != reg_c0)
+ DRV_LOG(WARNING, "metadata reg_c0 mask mismatche %08X:%08X",
+ sh->dv_meta_mask, reg_c0);
+ else
+ sh->dv_regc0_mask = reg_c0;
+ DRV_LOG(DEBUG, "metadata mode %u", priv->config.dv_xmeta_en);
+ DRV_LOG(DEBUG, "metadata MARK mask %08X", sh->dv_mark_mask);
+ DRV_LOG(DEBUG, "metadata META mask %08X", sh->dv_meta_mask);
+ DRV_LOG(DEBUG, "metadata reg_c0 mask %08X", sh->dv_regc0_mask);
+}
+
+/**
* Allocate page of door-bells and register it using DevX API.
*
* @param [in] dev
@@ -1803,7 +1857,7 @@ struct mlx5_flow_id_pool *
uint16_t port_id;
unsigned int i;
#ifdef HAVE_MLX5DV_DR_DEVX_PORT
- struct mlx5dv_devx_port devx_port;
+ struct mlx5dv_devx_port devx_port = { .comp_mask = 0 };
#endif
/* Determine if this port representor is supposed to be spawned. */
@@ -2035,13 +2089,17 @@ struct mlx5_flow_id_pool *
* vport index. The engaged part of metadata register is
* defined by mask.
*/
- devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
- MLX5DV_DEVX_PORT_MATCH_REG_C_0;
- err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port, &devx_port);
- if (err) {
- DRV_LOG(WARNING, "can't query devx port %d on device %s",
- spawn->ibv_port, spawn->ibv_dev->name);
- devx_port.comp_mask = 0;
+ if (switch_info->representor || switch_info->master) {
+ devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
+ MLX5DV_DEVX_PORT_MATCH_REG_C_0;
+ err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port,
+ &devx_port);
+ if (err) {
+ DRV_LOG(WARNING,
+ "can't query devx port %d on device %s",
+ spawn->ibv_port, spawn->ibv_dev->name);
+ devx_port.comp_mask = 0;
+ }
}
if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
priv->vport_meta_tag = devx_port.reg_c_0.value;
@@ -2361,18 +2419,27 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
- /* Query availability of metadata reg_c's. */
- err = mlx5_flow_discover_mreg_c(eth_dev);
- if (err < 0) {
- err = -err;
- goto error;
- }
if (!priv->config.dv_esw_en &&
priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
DRV_LOG(WARNING, "metadata mode %u is not supported "
"(no E-Switch)", priv->config.dv_xmeta_en);
priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
}
+ mlx5_set_metadata_mask(eth_dev);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ !priv->sh->dv_regc0_mask) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata reg_c[0] is available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e59f8f6..92d445a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -622,6 +622,9 @@ struct mlx5_ibv_shared {
} mr;
/* Shared DV/DR flow data section. */
pthread_mutex_t dv_mutex; /* DV context mutex. */
+ uint32_t dv_meta_mask; /* flow META metadata supported mask. */
+ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */
+ uint32_t dv_regc0_mask; /* available bits of metadata reg_c[0]. */
uint32_t dv_refcnt; /* DV/DR data reference counter. */
void *fdb_domain; /* FDB Direct Rules name space handle. */
struct mlx5_flow_tbl_resource fdb_tbl[MLX5_MAX_TABLES_FDB];
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b93022..049d6ae 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -899,13 +899,13 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev,
struct mlx5_flow_dv_modify_hdr_resource *res,
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
const struct mlx5_flow_action_copy_mreg *conf = action->conf;
- uint32_t mask = RTE_BE32(UINT32_MAX);
+ rte_be32_t mask = RTE_BE32(UINT32_MAX);
struct rte_flow_item item = {
.spec = NULL,
.mask = &mask,
@@ -915,9 +915,29 @@ struct field_modify_info modify_tcp[] = {
{0, 0, 0},
};
struct field_modify_info reg_dst = {
- .offset = (uint32_t)-1, /* Same as src. */
+ .offset = 0,
.id = reg_to_field[conf->dst],
};
+ /* Adjust reg_c[0] usage according to reported mask. */
+ if (conf->dst == REG_C_0 || conf->src == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t reg_c0 = priv->sh->dv_regc0_mask;
+
+ assert(reg_c0);
+ assert(priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY);
+ if (conf->dst == REG_C_0) {
+ /* Copy to reg_c[0], within mask only. */
+ reg_dst.offset = rte_bsf32(reg_c0);
+ /*
+ * Mask ignores the endianness, because
+ * there is no conversion in datapath.
+ */
+ mask = reg_c0 >> reg_dst.offset;
+ } else {
+ /* Copy from reg_c[0] to destination lower bits. */
+ mask = rte_cpu_to_be_32(reg_c0);
+ }
+ }
return flow_dv_convert_modify_action(&item,
reg_src, &reg_dst, res,
MLX5_MODIFICATION_TYPE_COPY,
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 11/20] net/mlx5: check the maximal modify actions number
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (9 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 10/20] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 12/20] net/mlx5: update metadata register id query Viacheslav Ovsiienko
` (11 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
If the extensive metadata registers are supported,
extensive metadata support in general can be assumed
as well, e.g. the metadata register copy action, support
for 16 modify header actions, preserving registers across
different steering domains (FDB and NIC) and so on.
This patch selects the maximal number of header modify
actions depending on the discovered metadata register
support.
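For a rough sense of scale (a hypothetical rule, not taken from the
patch): a single flow rewriting the source MAC (2 modification commands,
one for the 47..16 half and one for the 15..0 half), both IPv4 addresses
(2), both TCP ports (2), the TTL (1) plus a SET_TAG (1) already consumes
8 commands, the legacy limit, so adding any metadata register copy on
top of that only fits under the 16-command limit.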
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 9 +++++++--
drivers/net/mlx5/mlx5_flow_dv.c | 25 +++++++++++++++++++++++++
2 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f2b6726..c1d0a65 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -348,8 +348,13 @@ struct mlx5_flow_dv_tag_resource {
uint32_t tag; /**< the tag value. */
};
-/* Number of modification commands. */
-#define MLX5_MODIFY_NUM 8
+/*
+ * Number of modification commands.
+ * If extensive metadata registers are supported
+ * the maximal actions amount is 16 and 8 otherwise.
+ */
+#define MLX5_MODIFY_NUM 16
+#define MLX5_MODIFY_NUM_NO_MREG 8
/* Modify resource structure */
struct mlx5_flow_dv_modify_hdr_resource {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 049d6ae..f83c6ff 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2732,6 +2732,27 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get the maximum number of modify header actions.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * Max number of modify header actions device can support.
+ */
+static unsigned int
+flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev)
+{
+ /*
+ * There's no way to directly query the max cap. Although it has to be
+ * acquired by iterative trial, it is a safe assumption that more
+ * actions are supported by FW if extensive metadata register is
+ * supported.
+ */
+ return mlx5_flow_ext_mreg_supported(dev) ? MLX5_MODIFY_NUM :
+ MLX5_MODIFY_NUM_NO_MREG;
+}
+/**
* Find existing modify-header resource or create and register a new one.
*
* @param dev[in, out]
@@ -2758,6 +2779,10 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_flow_dv_modify_hdr_resource *cache_resource;
struct mlx5dv_dr_domain *ns;
+ if (resource->actions_num > flow_dv_modify_hdr_action_max(dev))
+ return rte_flow_error_set(error, EOVERFLOW,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many modify header items");
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
ns = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 12/20] net/mlx5: update metadata register id query
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (10 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 11/20] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 13/20] net/mlx5: add flow tag support Viacheslav Ovsiienko
` (10 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The NIC might support up to 8 extensive metadata registers.
These registers are supposed to be used by multiple features.
There is a register id query routine that allows determining
which register is actually used by the specified feature.
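A rough usage sketch (simplified, error handling trimmed) of the query
routine exported by this patch; the feature name and index are only
examples:
static int
example_resolve_app_tag(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
	int ret;

	/* Ask which reg_c backs application TAG index 0. */
	ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, 0, error);
	if (ret < 0)
		return ret; /* rte_errno already set by the query */
	/* ret now holds a REG_C_x value to program into matchers/actions. */
	return ret;
}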
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 76 +++++++++++++++++++++++++++++---------------
drivers/net/mlx5/mlx5_flow.h | 17 ++++++++++
2 files changed, 67 insertions(+), 26 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f32ea8d..c38208c 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -316,12 +316,6 @@ struct mlx5_flow_tunnel_info {
},
};
-enum mlx5_feature_name {
- MLX5_HAIRPIN_RX,
- MLX5_HAIRPIN_TX,
- MLX5_APPLICATION,
-};
-
/**
* Translate tag ID to register.
*
@@ -338,37 +332,66 @@ enum mlx5_feature_name {
* The request register on success, a negative errno
* value otherwise and rte_errno is set.
*/
-__rte_unused
-static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
- enum mlx5_feature_name feature,
- uint32_t id,
- struct rte_flow_error *error)
+enum modify_reg
+mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error)
{
- static enum modify_reg id2reg[] = {
- [0] = REG_A,
- [1] = REG_C_2,
- [2] = REG_C_3,
- [3] = REG_C_4,
- [4] = REG_B,};
-
- dev = (void *)dev;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
switch (feature) {
case MLX5_HAIRPIN_RX:
return REG_B;
case MLX5_HAIRPIN_TX:
return REG_A;
- case MLX5_APPLICATION:
- if (id > 4)
+ case MLX5_METADATA_RX:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_B;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_0;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_1;
+ }
+ break;
+ case MLX5_METADATA_TX:
+ return REG_A;
+ case MLX5_METADATA_FDB:
+ return REG_C_0;
+ case MLX5_FLOW_MARK:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_NONE;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_1;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_0;
+ }
+ break;
+ case MLX5_COPY_MARK:
+ return REG_C_2;
+ case MLX5_APP_TAG:
+ /*
+ * Suppose engaging reg_c_2 .. reg_c_7 registers.
+ * reg_c_2 is reserved for coloring by meters.
+ */
+ if (id > (REG_C_7 - REG_C_3))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL, "invalid tag id");
- return id2reg[id];
+ if (config->flow_mreg_c[id + REG_C_3 - REG_C_0] == REG_NONE)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "unsupported tag id");
+ return config->flow_mreg_c[id + REG_C_3 - REG_C_0];
}
+ assert(false);
return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
NULL, "invalid feature name");
}
-
/**
* Check extensive flow metadata register support.
*
@@ -2667,7 +2690,6 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
struct mlx5_rte_flow_item_tag *tag_item;
struct rte_flow_item *item;
char *addr;
- struct rte_flow_error error;
int encap = 0;
mlx5_flow_id_get(priv->sh->flow_id_pool, flow_id);
@@ -2733,7 +2755,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
rte_memcpy(actions_rx, actions, sizeof(struct rte_flow_action));
actions_rx++;
set_tag = (void *)actions_rx;
- set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
+ set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
@@ -2743,7 +2766,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
tag_item->data = *flow_id;
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ tag_item->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c1d0a65..9371e11 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -63,6 +63,18 @@ struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
};
+/* Feature name to allocate metadata register. */
+enum mlx5_feature_name {
+ MLX5_HAIRPIN_RX,
+ MLX5_HAIRPIN_TX,
+ MLX5_METADATA_RX,
+ MLX5_METADATA_TX,
+ MLX5_METADATA_FDB,
+ MLX5_FLOW_MARK,
+ MLX5_APP_TAG,
+ MLX5_COPY_MARK,
+};
+
/* Pattern outer Layer bits. */
#define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0)
#define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1)
@@ -534,6 +546,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_query_t query;
};
+
#define MLX5_CNT_CONTAINER(sh, batch, thread) (&(sh)->cmng.ccont \
[(((sh)->cmng.mhi[batch] >> (thread)) & 0x1) * 2 + (batch)])
#define MLX5_CNT_CONTAINER_UNUSED(sh, batch, thread) (&(sh)->cmng.ccont \
@@ -554,6 +567,10 @@ uint64_t mlx5_flow_hashfields_adjust(struct mlx5_flow *dev_flow, int tunnel,
uint64_t hash_fields);
uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
uint32_t subpriority);
+enum modify_reg mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error);
const struct rte_flow_action *mlx5_flow_find_action
(const struct rte_flow_action *actions,
enum rte_flow_action_type action);
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 13/20] net/mlx5: add flow tag support
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (11 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 12/20] net/mlx5: update metadata register id query Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 14/20] net/mlx5: extend flow mark support Viacheslav Ovsiienko
` (9 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add support for the new rte_flow item and action - TAG and SET_TAG.
TAG is a transient value which can be kept during flow matching.
This is supported through the device metadata registers reg_c[].
Although 8 registers are available on the current mlx5 devices,
some of them can be reserved for firmware or kernel purposes.
The availability should be queried by the iterative trial-and-error
mlx5_flow_discover_mreg_c() routine.
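For illustration (not part of the patch), an application-level pair of
rules using the new entities could look roughly like the sketch below:
one flow writes tag index 0 and jumps, another flow in the target group
matches on it (fate actions and error handling omitted):
struct rte_flow_action_jump jump = { .group = 1 };
struct rte_flow_action_set_tag set_tag = {
	.data = 0x1234, .mask = 0xffffffff, .index = 0,
};
struct rte_flow_action set_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TAG, .conf = &set_tag },
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
/* In group 1: match the tag value written above. */
struct rte_flow_item_tag tag_spec = { .data = 0x1234, .index = 0 };
struct rte_flow_item tag_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};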
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 232 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 228 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f83c6ff..08e78b0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -870,10 +870,12 @@ struct field_modify_info modify_tcp[] = {
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"too many items to modify");
+ assert(conf->id != REG_NONE);
+ assert(conf->id < RTE_DIM(reg_to_field));
actions[i].action_type = MLX5_MODIFICATION_TYPE_SET;
actions[i].field = reg_to_field[conf->id];
actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- actions[i].data1 = conf->data;
+ actions[i].data1 = rte_cpu_to_be_32(conf->data);
++i;
resource->actions_num = i;
if (!resource->actions_num)
@@ -884,6 +886,52 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert SET_TAG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_tag
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_action_set_tag *conf,
+ struct rte_flow_error *error)
+{
+ rte_be32_t data = rte_cpu_to_be_32(conf->data);
+ rte_be32_t mask = rte_cpu_to_be_32(conf->mask);
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum mlx5_modification_field reg_type;
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ assert((unsigned int)ret < RTE_DIM(reg_to_field));
+ reg_type = reg_to_field[ret];
+ assert(reg_type > 0);
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_type};
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Convert internal COPY_REG action to DV specification.
*
* @param[in] dev
@@ -999,6 +1047,65 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate TAG item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tag *spec = item->spec;
+ const struct rte_flow_item_tag *mask = item->mask;
+ const struct rte_flow_item_tag nic_mask = {
+ .data = RTE_BE32(UINT32_MAX),
+ .index = 0xff,
+ };
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extensive metadata register"
+ " isn't supported");
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (!mask)
+ mask = &rte_flow_item_tag_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_tag),
+ error);
+ if (ret < 0)
+ return ret;
+ if (mask->index != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
+ "partial mask for tag index"
+ " is not supported");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, spec->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ return 0;
+}
+
+/**
* Validate vport item.
*
* @param[in] dev
@@ -1359,6 +1466,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_TAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the encap action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_tag *conf;
+ const uint64_t terminal_action_flags =
+ MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE |
+ MLX5_FLOW_ACTION_RSS;
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extensive metadata register"
+ " isn't supported");
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_tag *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ if (!attr->transfer && attr->ingress &&
+ (action_flags & terminal_action_flags))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "set_tag has no effect"
+ " with terminal actions");
+ return 0;
+}
+
+/**
* Validate count action.
*
* @param[in] dev
@@ -3748,6 +3911,13 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ ret = flow_dv_validate_item_tag(dev, items,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
break;
@@ -3795,6 +3965,17 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK;
++actions_n;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ ret = flow_dv_validate_action_set_tag(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
ret = mlx5_flow_validate_action_drop(action_flags,
attr, error);
@@ -5082,8 +5263,38 @@ struct field_modify_info modify_tcp[] = {
{
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
- enum modify_reg reg = tag_v->id;
+ assert(tag_v);
+ flow_dv_match_meta_reg(matcher, key, tag_v->id, tag_v->data,
+ tag_m ? tag_m->data : UINT32_MAX);
+}
+
+/**
+ * Add TAG item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_tag(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ const struct rte_flow_item_tag *tag_v = item->spec;
+ const struct rte_flow_item_tag *tag_m = item->mask;
+ enum modify_reg reg;
+
+ assert(tag_v);
+ tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask;
+ /* Get the metadata register index for the tag. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL);
+ assert(reg > 0);
flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
@@ -5758,6 +5969,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ if (flow_dv_convert_action_set_tag
+ (dev, &mhdr_res,
+ (const struct rte_flow_action_set_tag *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
@@ -6038,7 +6257,7 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
- if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
+ if (mhdr_res.actions_num) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
(dev, &mhdr_res, dev_flow, error))
@@ -6050,7 +6269,7 @@ struct field_modify_info modify_tcp[] = {
default:
break;
}
- if ((action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) &&
+ if (mhdr_res.actions_num &&
modify_action_position == UINT32_MAX)
modify_action_position = actions_n++;
}
@@ -6213,6 +6432,11 @@ struct field_modify_info modify_tcp[] = {
items, tunnel);
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ flow_dv_translate_item_tag(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
flow_dv_translate_mlx5_item_tag(match_mask,
match_value, items);
--
1.8.3.1
* [dpdk-dev] [PATCH 14/20] net/mlx5: extend flow mark support
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (12 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 13/20] net/mlx5: add flow tag support Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 15/20] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
` (8 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The flow MARK item is newly supported along with the MARK action.
The MARK action and item are supported on both Rx and Tx. They work
on the metadata reg_c[] registers only if the extensive flow metadata
registers are supported. Without that support, the MARK action behaves
the same as before - it is valid only on Rx, and the MARK item is not
valid at all.
The FLAG action is modified accordingly; it is supported on both Rx
and Tx via reg_c[] if the extensive flow metadata registers are
supported.
However, the new MARK/FLAG item and actions are currently disabled
until register copy on loopback is supported by forthcoming patches.
The actual index of the metadata reg_c[] register engaged to support
the FLAG/MARK actions depends on the dv_xmeta_en devarg value.
For extensive metadata mode 1, reg_c[1] is used and the transitive
MARK data width is 24 bits. For extensive metadata mode 2, reg_c[0]
is used and the transitive MARK data width might be restricted to
0 or 16 bits, depending on kernel usage of reg_c[0].
The actually supported width can be discovered by a series of trials
with rte_flow_validate(), as sketched below.
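For illustration only (this snippet is not part of the patch): a minimal
sketch of such a probing loop. The port number, queue index and the helper
name are assumptions, not PMD or testpmd code.

#include <rte_flow.h>

/* Hypothetical helper: find the widest MARK id the PMD accepts on Rx. */
static uint32_t
probe_mark_width(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark;
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	uint32_t width;

	for (width = 24; width > 0; width--) {
		/* Try the maximal id representable with this width. */
		mark.id = (1u << width) - 1;
		if (!rte_flow_validate(port_id, &attr, pattern,
				       actions, &error))
			break;
	}
	return width;
}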
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 5 +-
drivers/net/mlx5/mlx5_flow_dv.c | 383 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 370 insertions(+), 18 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9371e11..d6209ff 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -102,6 +102,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ITEM_METADATA (1u << 16)
#define MLX5_FLOW_ITEM_PORT_ID (1u << 17)
#define MLX5_FLOW_ITEM_TAG (1u << 18)
+#define MLX5_FLOW_ITEM_MARK (1u << 19)
/* Pattern MISC bits. */
#define MLX5_FLOW_LAYER_ICMP (1u << 19)
@@ -194,6 +195,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_INC_TCP_ACK (1u << 30)
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
+#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -228,7 +230,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_INC_TCP_ACK | \
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
- MLX5_FLOW_ACTION_SET_TAG)
+ MLX5_FLOW_ACTION_SET_TAG | \
+ MLX5_FLOW_ACTION_MARK_EXT)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 08e78b0..5714a6d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -993,6 +993,125 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert MARK action to DV specification. This routine is used
+ * in extensive metadata only and requires metadata register to be
+ * handled. In legacy mode hardware tag resource is engaged.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] conf
+ * Pointer to MARK action specification.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action_mark *conf,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ rte_be32_t mask = rte_cpu_to_be_32(MLX5_FLOW_MARK_MASK &
+ priv->sh->dv_mark_mask);
+ rte_be32_t data = rte_cpu_to_be_32(conf->id) & mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ {4, 0, 0}, /* dynamic instead of MLX5_MODI_META_REG_C_1. */
+ {0, 0, 0},
+ };
+ enum modify_reg reg;
+
+ if (!mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "zero mark action mask");
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (reg < 0)
+ return reg;
+ assert(reg > 0);
+ reg_c_x[0].id = reg_to_field[reg];
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
+ * Validate MARK item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_item_mark *spec = item->spec;
+ const struct rte_flow_item_mark *mask = item->mask;
+ const struct rte_flow_item_mark nic_mask = {
+ .id = priv->sh->dv_mark_mask,
+ };
+ int ret;
+
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata feature"
+ " isn't enabled");
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ if (!nic_mask.id)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (spec->id >= (MLX5_FLOW_MARK_MAX & nic_mask.id))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &spec->id,
+ "mark id exceeds the limit");
+ if (!mask)
+ mask = &nic_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_mark),
+ error);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -1465,6 +1584,139 @@ struct field_modify_info modify_tcp[] = {
return 0;
}
+/*
+ * Validate the FLAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_flag(struct rte_eth_dev *dev,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_flag(action_flags, attr,
+ error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support flag action");
+ if (!(priv->sh->dv_mark_mask & MLX5_FLOW_MARK_DEFAULT))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't mark and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 flag"
+ " actions in same flow");
+ return 0;
+}
+
+/**
+ * Validate MARK action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action_mark *mark = action->conf;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_mark(action, action_flags,
+ attr, error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support mark action");
+ if (!priv->sh->dv_mark_mask)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (!mark)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ if (mark->id >= (MLX5_FLOW_MARK_MAX & priv->sh->dv_mark_mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &mark->id,
+ "mark id exceeds the limit");
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't flag and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 mark actions in same"
+ " flow");
+ return 0;
+}
+
/**
* Validate SET_TAG action.
*
@@ -3732,6 +3984,8 @@ struct field_modify_info modify_tcp[] = {
.dst_port = RTE_BE16(UINT16_MAX),
}
};
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
if (items == NULL)
return -1;
@@ -3888,6 +4142,14 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ ret = flow_dv_validate_item_mark(dev, items, attr,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
ret = flow_dv_validate_item_meta(dev, items, attr,
error);
@@ -3949,21 +4211,39 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
- ret = mlx5_flow_validate_action_flag(action_flags,
- attr, error);
+ ret = flow_dv_validate_action_flag(dev, action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_FLAG |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_MARK:
- ret = mlx5_flow_validate_action_mark(actions,
- action_flags,
- attr, error);
+ ret = flow_dv_validate_action_mark(dev, actions,
+ action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_MARK;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_MARK |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
@@ -4234,12 +4514,14 @@ struct field_modify_info modify_tcp[] = {
" actions in the same rule");
/* Eswitch has few restrictions on using items and actions */
if (attr->transfer) {
- if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_FLAG)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
"unsupported action FLAG");
- if (action_flags & MLX5_FLOW_ACTION_MARK)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_MARK)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
@@ -5202,6 +5484,44 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add MARK item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_mark(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_mark *mark;
+ uint32_t value;
+ uint32_t mask;
+
+ mark = item->mask ? (const void *)item->mask :
+ &rte_flow_item_mark_mask;
+ mask = mark->id & priv->sh->dv_mark_mask;
+ mark = (const void *)item->spec;
+ assert(mark);
+ value = mark->id & priv->sh->dv_mark_mask & mask;
+ if (mask) {
+ enum modify_reg reg;
+
+ /* Get the metadata register index for the mark. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, NULL);
+ assert(reg > 0);
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -5210,8 +5530,6 @@ struct field_modify_info modify_tcp[] = {
* Flow matcher value.
* @param[in] item
* Flow pattern to translate.
- * @param[in] inner
- * Item is inner pattern.
*/
static void
flow_dv_translate_item_meta(void *matcher, void *key,
@@ -5882,6 +6200,7 @@ struct field_modify_info modify_tcp[] = {
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
struct rte_flow *flow = dev_flow->flow;
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -5916,7 +6235,7 @@ struct field_modify_info modify_tcp[] = {
if (attr->transfer)
mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
- priority = priv->config.flow_prio - 1;
+ priority = dev_conf->flow_prio - 1;
for (; !actions_end ; actions++) {
const struct rte_flow_action_queue *queue;
const struct rte_flow_action_rss *rss;
@@ -5947,6 +6266,19 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_PORT_ID;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ struct rte_flow_action_mark mark = {
+ .id = MLX5_FLOW_MARK_DEFAULT,
+ };
+
+ if (flow_dv_convert_action_mark(dev, &mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
if (!dev_flow->dv.tag_resource)
@@ -5955,9 +6287,22 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ const struct rte_flow_action_mark *mark =
+ (const struct rte_flow_action_mark *)
+ actions->conf;
+
+ if (flow_dv_convert_action_mark(dev, mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
+ /* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
@@ -5967,7 +6312,6 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
@@ -6004,7 +6348,7 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
- if (!priv->config.devx) {
+ if (!dev_conf->devx) {
rte_errno = ENOTSUP;
goto cnt_err;
}
@@ -6417,6 +6761,11 @@ struct field_modify_info modify_tcp[] = {
items, last_item, tunnel);
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ flow_dv_translate_item_mark(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
flow_dv_translate_item_meta(match_mask, match_value,
items);
--
1.8.3.1
* [dpdk-dev] [PATCH 15/20] net/mlx5: extend flow meta data support
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (13 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 14/20] net/mlx5: extend flow mark support Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 16/20] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
` (7 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The META item is supported on both Rx and Tx, and the 'transfer'
attribute is supported as well. The SET_META action is also added.
Due to restrictions on reg_c[meta], various bit widths might be
available. If the devarg parameter dv_xmeta_en=1, META uses the
metadata register reg_c[0], which may be required for internal
kernel or firmware needs. In this case the PMD queries the kernel
about the available fields in reg_c[0] and restricts the register
usage accordingly. If the devarg parameter dv_xmeta_en=2, the META
feature uses reg_c[1] and there should be no limitations on the
data width.
However, the extensive META feature is currently disabled until
register copy on loopback is supported by forthcoming patches.
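Purely as an illustration (not part of the patch): a sketch of the intended
usage - setting metadata on an egress flow and matching it back on an ingress
flow. The port number, queue index, metadata value and function name are
assumptions; error details are not examined.

#include <rte_flow.h>

static int
create_meta_flows(uint16_t port_id)
{
	struct rte_flow_error error;
	struct rte_flow_attr tx_attr = { .egress = 1 };
	struct rte_flow_action_set_meta set_meta = {
		.data = 0xcafe,
		.mask = 0xffff,
	};
	struct rte_flow_item tx_pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action tx_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &set_meta },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_attr rx_attr = { .ingress = 1 };
	struct rte_flow_item_meta meta_spec = { .data = 0xcafe };
	struct rte_flow_item rx_pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action rx_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Tx: write 0xcafe into the flow engine metadata. */
	if (!rte_flow_create(port_id, &tx_attr, tx_pattern, tx_actions, &error))
		return -1;
	/* Rx: steer packets carrying metadata 0xcafe to queue 0. */
	if (!rte_flow_create(port_id, &rx_attr, rx_pattern, rx_actions, &error))
		return -1;
	return 0;
}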
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 4 +-
drivers/net/mlx5/mlx5_flow_dv.c | 250 +++++++++++++++++++++++++++++++++++++---
2 files changed, 235 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index d6209ff..ef16aef 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -196,6 +196,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
+#define MLX5_FLOW_ACTION_SET_META (1ull << 34)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -231,7 +232,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
MLX5_FLOW_ACTION_SET_TAG | \
- MLX5_FLOW_ACTION_MARK_EXT)
+ MLX5_FLOW_ACTION_MARK_EXT | \
+ MLX5_FLOW_ACTION_SET_META)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5714a6d..19f58cb 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1043,6 +1043,103 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get metadata register index for specified steering domain.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ * Attributes of flow to determine steering domain.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * positive index on success, a negative errno value otherwise
+ * and rte_errno is set.
+ */
+static enum modify_reg
+flow_dv_get_metadata_reg(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ enum modify_reg reg =
+ mlx5_flow_get_reg_id(dev, attr->transfer ?
+ MLX5_METADATA_FDB :
+ attr->egress ?
+ MLX5_METADATA_TX :
+ MLX5_METADATA_RX, 0, error);
+ if (reg < 0)
+ return rte_flow_error_set(error,
+ ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "unavailable "
+ "metadata register");
+ return reg;
+}
+
+/**
+ * Convert SET_META action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_meta
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action_set_meta *conf,
+ struct rte_flow_error *error)
+{
+ uint32_t data = conf->data;
+ uint32_t mask = conf->mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum modify_reg reg = flow_dv_get_metadata_reg(dev, attr, error);
+
+ if (reg < 0)
+ return reg;
+ /*
+ * In datapath code there are no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0 = rte_bsf32(msk_c0);
+
+ data = rte_cpu_to_be_32(data);
+ mask = rte_cpu_to_be_32(mask);
+ msk_c0 = rte_cpu_to_be_32(msk_c0);
+ mask <<= shl_c0;
+ data <<= shl_c0;
+ assert(msk_c0);
+ assert(!(~msk_c0 & mask));
+ data = rte_be_to_cpu_32(data);
+ mask = rte_be_to_cpu_32(mask);
+ }
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_to_field[reg]};
+ /* The routine expects parameters in memory as big-endian ones. */
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Validate MARK item.
*
* @param[in] dev
@@ -1132,11 +1229,14 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_attr *attr,
struct rte_flow_error *error)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
const struct rte_flow_item_meta *spec = item->spec;
const struct rte_flow_item_meta *mask = item->mask;
- const struct rte_flow_item_meta nic_mask = {
+ struct rte_flow_item_meta nic_mask = {
.data = UINT32_MAX
};
+ enum modify_reg reg;
int ret;
if (!spec)
@@ -1146,23 +1246,27 @@ struct field_modify_info modify_tcp[] = {
"data cannot be empty");
if (!spec->data)
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
- NULL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
"data cannot be zero");
+ if (config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg != REG_A && reg != REG_B)
+ nic_mask.data = priv->sh->dv_meta_mask;
+ }
if (!mask)
mask = &rte_flow_item_meta_mask;
ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
(const uint8_t *)&nic_mask,
sizeof(struct rte_flow_item_meta),
error);
- if (ret < 0)
- return ret;
- if (attr->ingress)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
- NULL,
- "pattern not supported for ingress");
- return 0;
+ return ret;
}
/**
@@ -1718,6 +1822,67 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_META action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the action structure.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_meta(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags __rte_unused,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_meta *conf;
+ uint32_t nic_mask = UINT32_MAX;
+ enum modify_reg reg;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg != REG_A && reg != REG_B) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ nic_mask = priv->sh->dv_meta_mask;
+ }
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_meta *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ if (conf->mask & ~nic_mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "meta data must be within reg C0");
+ if (!(conf->data & conf->mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero value has no effect");
+ return 0;
+}
+
+/**
* Validate SET_TAG action.
*
* @param[in] dev
@@ -4245,6 +4410,17 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
}
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ ret = flow_dv_validate_action_set_meta(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
action_flags,
@@ -5524,15 +5700,21 @@ struct field_modify_info modify_tcp[] = {
/**
* Add META item to matcher
*
+ * @param[in] dev
+ * The device to configure through.
* @param[in, out] matcher
* Flow matcher.
* @param[in, out] key
* Flow matcher value.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
* @param[in] item
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_meta(void *matcher, void *key,
+flow_dv_translate_item_meta(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_attr *attr,
const struct rte_flow_item *item)
{
const struct rte_flow_item_meta *meta_m;
@@ -5542,10 +5724,34 @@ struct field_modify_info modify_tcp[] = {
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v)
- flow_dv_match_meta_reg(matcher, key, REG_A,
- rte_cpu_to_be_32(meta_v->data),
- rte_cpu_to_be_32(meta_m->data));
+ if (meta_v) {
+ enum modify_reg reg;
+ uint32_t value = meta_v->data;
+ uint32_t mask = meta_m->data;
+
+ reg = flow_dv_get_metadata_reg(dev, attr, NULL);
+ if (reg < 0)
+ return;
+ /*
+ * In datapath code there are no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ value = rte_cpu_to_be_32(value);
+ mask = rte_cpu_to_be_32(mask);
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0 = rte_bsf32(msk_c0);
+
+ msk_c0 = rte_cpu_to_be_32(msk_c0);
+ value <<= shl_c0;
+ mask <<= shl_c0;
+ assert(msk_c0);
+ assert(!(~msk_c0 & mask));
+ }
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
}
/**
@@ -6313,6 +6519,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ if (flow_dv_convert_action_set_meta
+ (dev, &mhdr_res, attr,
+ (const struct rte_flow_action_set_meta *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
(dev, &mhdr_res,
@@ -6767,8 +6981,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_ITEM_MARK;
break;
case RTE_FLOW_ITEM_TYPE_META:
- flow_dv_translate_item_meta(match_mask, match_value,
- items);
+ flow_dv_translate_item_meta(dev, match_mask,
+ match_value, attr, items);
last_item = MLX5_FLOW_ITEM_METADATA;
break;
case RTE_FLOW_ITEM_TYPE_ICMP:
--
1.8.3.1
* [dpdk-dev] [PATCH 16/20] net/mlx5: add meta data support to Rx datapath
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (14 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 15/20] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 17/20] net/mlx5: add simple hash table Viacheslav Ovsiienko
` (6 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
This patch moves the metadata from the completion descriptor
to the appropriate dynamic mbuf field.
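For illustration (not part of the patch): how an application could consume
the metadata delivered through the dynamic field. The dynamic field must be
registered before the port is started; the helper names are assumptions.

#include <rte_flow.h>
#include <rte_mbuf.h>

/* At init time, before rte_eth_dev_start(): enable the dynamic field. */
static int
enable_rx_metadata(void)
{
	return rte_flow_dynf_metadata_register();
}

/* Per received packet: returns 0 when no metadata was delivered. */
static inline uint32_t
rx_metadata(struct rte_mbuf *pkt)
{
	if (pkt->ol_flags & PKT_RX_DYNF_METADATA)
		return *RTE_FLOW_DYNF_METADATA(pkt);
	return 0;
}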
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_prm.h | 6 ++++--
drivers/net/mlx5/mlx5_rxtx.c | 6 ++++++
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +++++++++++++++++++++++--
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +++++++++++++++++++++++----
5 files changed, 79 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b405cb6..a0c37c8 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -357,12 +357,14 @@ struct mlx5_cqe {
uint16_t hdr_type_etc;
uint16_t vlan_info;
uint8_t lro_num_seg;
- uint8_t rsvd3[11];
+ uint8_t rsvd3[3];
+ uint32_t flow_table_metadata;
+ uint8_t rsvd4[4];
uint32_t byte_cnt;
uint64_t timestamp;
uint32_t sop_drop_qpn;
uint16_t wqe_counter;
- uint8_t rsvd4;
+ uint8_t rsvd5;
uint8_t op_own;
};
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 887e283..b9a9bab 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -26,6 +26,7 @@
#include <rte_branch_prediction.h>
#include <rte_ether.h>
#include <rte_cycles.h>
+#include <rte_flow.h>
#include "mlx5.h"
#include "mlx5_utils.h"
@@ -1251,6 +1252,11 @@ enum mlx5_txcmp_code {
pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
}
}
+ if (rte_flow_dynf_metadata_avail() && cqe->flow_table_metadata) {
+ pkt->ol_flags |= PKT_RX_DYNF_METADATA;
+ *RTE_FLOW_DYNF_METADATA(pkt) =
+ rte_be_to_cpu_32(cqe->flow_table_metadata);
+ }
if (rxq->csum)
pkt->ol_flags |= rxq_cq_to_ol_flags(cqe);
if (rxq->vlan_strip &&
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 3be3a6d..8e79883 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -416,7 +416,6 @@
vec_cmpeq((vector unsigned int)flow_tag,
(vector unsigned int)pinfo_ft_mask)));
}
-
/*
* Merge the two fields to generate the following:
* bit[1] = l3_ok
@@ -1011,7 +1010,29 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
-
+ if (rte_flow_dynf_metadata_avail()) {
+ uint64_t flag = rte_flow_dynf_metadata_mask;
+ int offs = rte_flow_dynf_metadata_offs;
+ uint32_t metadata;
+
+ /* This code is subject to further optimization. */
+ metadata = cq[pos].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos], offs, uint32_t *) =
+ metadata;
+ pkts[pos]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 1].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 1], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 1]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 2].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 2], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 2]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 3].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 3], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 3]->ol_flags |= metadata ? flag : 0ULL;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vec_perm(op_own, zero, len_shuf_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index e914d01..86785c7 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -687,6 +687,29 @@
container_of(p3, struct mlx5_cqe,
pkt_info)->timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(elts[pos]) =
+ container_of(p0, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 1]) =
+ container_of(p1, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 2]) =
+ container_of(p2, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 3]) =
+ container_of(p3, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos]))
+ elts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 1]))
+ elts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 2]))
+ elts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 3]))
+ elts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vbic_u16(byte_cnt, invalid_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index ca8ed41..35b7761 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -537,8 +537,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].csum);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x30);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd4[2]);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x04);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -564,8 +564,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].csum);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x30);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd4[2]);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x04);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -640,6 +640,25 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(pkts[pos]) =
+ cq[pos].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 1]) =
+ cq[pos + p1].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 2]) =
+ cq[pos + p2].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 3]) =
+ cq[pos + p3].flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos]))
+ pkts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 1]))
+ pkts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 2]))
+ pkts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 3]))
+ pkts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = _mm_shuffle_epi8(op_own, len_shuf_mask);
--
1.8.3.1
* [dpdk-dev] [PATCH 17/20] net/mlx5: add simple hash table
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (15 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 16/20] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 18/20] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
` (5 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
A simple hash table is added. The hash function is the modulo
operator and no conflict is allowed - keys must be unique. It is
useful for managing a resource pool with finite unique keys, e.g.
a flow table entry with a unique flow ID.
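A minimal usage sketch, for illustration only; the embedding structure, the
table size and the wrapper names are assumptions (the mlx5_shtable_* helpers
themselves come from mlx5_utils.h in this patch, container_of from
rte_common.h).

#define FLOW_TBL_SZ 1024 /* hypothetical bucket count */

struct flow_entry {
	struct mlx5_shtable_entry entry; /* embedded key + list node */
	void *flow;                      /* arbitrary payload */
};

/* Zero-initialized bucket heads. */
static struct mlx5_shtable_bucket flow_tbl[FLOW_TBL_SZ];

static int
flow_entry_add(struct flow_entry *fe, uint32_t flow_id)
{
	fe->entry.key = flow_id;
	return mlx5_shtable_insert(flow_tbl, FLOW_TBL_SZ, &fe->entry);
}

static struct flow_entry *
flow_entry_lookup(uint32_t flow_id)
{
	struct mlx5_shtable_entry *e =
		mlx5_shtable_search(flow_tbl, FLOW_TBL_SZ, flow_id);

	return e ? container_of(e, struct flow_entry, entry) : NULL;
}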
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_utils.h | 115 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 112 insertions(+), 3 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 97092c7..5a7de62 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -6,12 +6,13 @@
#ifndef RTE_PMD_MLX5_UTILS_H_
#define RTE_PMD_MLX5_UTILS_H_
+#include <assert.h>
+#include <errno.h>
+#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
-#include <limits.h>
-#include <assert.h>
-#include <errno.h>
+#include <sys/queue.h>
#include "mlx5_defs.h"
@@ -150,6 +151,114 @@
\
snprintf(name, sizeof(name), __VA_ARGS__)
+/*
+ * Simple Hash Table for Key-Data pair.
+ *
+ * Key must be unique. No conflict is allowed.
+ *
+ * mlx5_shtable_entry could be a part of the data structure to store, e.g.,
+ *
+ * struct DATA {
+ * struct mlx5_shtable_entry entry;
+ * custom_data_t custom_data;
+ * };
+ *
+ * And the actual hash table should be defined as,
+ *
+ * struct mlx5_shtable_bucket TABLE[TABLE_SZ];
+ *
+ * Hash function is simply modulo (%) operator for now.
+ */
+
+/* Data entry for hash table. */
+struct mlx5_shtable_entry {
+ LIST_ENTRY(mlx5_shtable_entry) next;
+ uint32_t key;
+};
+
+/* Structure for hash bucket. */
+LIST_HEAD(mlx5_shtable_bucket, mlx5_shtable_entry);
+
+/**
+ * Search an entry matching the key.
+ *
+ * @param htable
+ * Pointer to the table.
+ * @param tbl_sz
+ * Size of the table.
+ * @param key
+ * Key for the searching entry.
+ *
+ * @return
+ * Pointer of the table entry if found, NULL otherwise.
+ */
+static inline struct mlx5_shtable_entry *
+mlx5_shtable_search(struct mlx5_shtable_bucket *htable, int tbl_sz,
+ uint32_t key)
+{
+ struct mlx5_shtable_bucket *bucket;
+ struct mlx5_shtable_entry *node;
+ uint32_t idx;
+
+ idx = key % tbl_sz;
+ bucket = &htable[idx];
+ LIST_FOREACH(node, bucket, next) {
+ if (node->key == key)
+ return node;
+ }
+ return NULL;
+}
+
+/**
+ * Insert an entry.
+ *
+ * @param htable
+ * Pointer to the table.
+ * @param tbl_sz
+ * Size of the table.
+ * @param ent
+ * Entry to insert; its key field must be set and unique.
+ *
+ * @return
+ * 0 on success, -EEXIST if the key already exists.
+ */
+static inline int
+mlx5_shtable_insert(struct mlx5_shtable_bucket *htable, int tbl_sz,
+ struct mlx5_shtable_entry *ent)
+{
+ struct mlx5_shtable_bucket *bucket;
+ struct mlx5_shtable_entry *node;
+ uint32_t idx;
+
+ idx = ent->key % tbl_sz;
+ bucket = &htable[idx];
+ LIST_FOREACH(node, bucket, next) {
+ if (node->key == ent->key)
+ return -EEXIST;
+ }
+ LIST_INSERT_HEAD(bucket, ent, next);
+ return 0;
+}
+
+/**
+ * Remove an entry from its table.
+ *
+ * @param ent
+ * Entry to remove; the call is a no-op if the entry
+ * is not attached to any bucket.
+ */
+static inline void
+mlx5_shtable_remove(struct mlx5_shtable_entry *ent)
+{
+ /* Check if entry is not attached. */
+ if (!ent->next.le_prev)
+ return;
+ LIST_REMOVE(ent, next);
+}
+
/**
* Return nearest power of two above input value.
*
--
1.8.3.1
* [dpdk-dev] [PATCH 18/20] net/mlx5: introduce flow splitters chain
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (16 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 17/20] net/mlx5: add simple hash table Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 19/20] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
` (4 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The mlx5 hardware has some limitations and a flow might
need to be split into multiple internal subflows.
For example, this is needed to provide meter object
sharing between multiple flows or to provide metadata
register copying before the final queue/rss action.
Multiple features might require several levels of
splitting. For example, the hairpin feature splits the
original flow into two parts - Rx and Tx. Then the
RSS feature should split the Rx part into multiple subflows
with extended item sets. Then, the metering feature might
require splitting each RSS subflow into a meter jump
chain, and then the extensive metadata support might
require the final subflow splitting. So, we have
to organize a chain of splitting subroutines to
abstract each level of splitting.
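Conceptually, every level of the chain keeps the same signature and calls
the next level for each subflow it produces. A hypothetical pass-through
splitter (the name is an assumption, not part of the patch) would look like
the sketch below; flow_create_split_inner() is the real last-stage routine
added by this patch.

static int
flow_create_split_example(struct rte_eth_dev *dev,
			  struct rte_flow *flow,
			  const struct rte_flow_attr *attr,
			  const struct rte_flow_item items[],
			  const struct rte_flow_action actions[],
			  bool external, struct rte_flow_error *error)
{
	/*
	 * A real splitter would rewrite the item/action lists here and
	 * call the next level once per generated subflow; this trivial
	 * variant just forwards the original flow unchanged.
	 */
	return flow_create_split_inner(dev, flow, NULL, attr, items,
				       actions, external, error);
}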
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 116 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 106 insertions(+), 10 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c38208c..a310a88 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2782,6 +2782,103 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * The last stage of splitting chain, just creates the subflow
+ * without any modification.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in, out] sub_flow
+ * Pointer to return the created subflow, may be NULL.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_inner(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct mlx5_flow **sub_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_flow *dev_flow;
+
+ dev_flow = flow_drv_prepare(flow, attr, items, actions, error);
+ if (!dev_flow)
+ return -rte_errno;
+ dev_flow->flow = flow;
+ dev_flow->external = external;
+ /* Subflow object was created, we must include it in the list. */
+ LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
+ if (sub_flow)
+ *sub_flow = dev_flow;
+ return flow_drv_translate(dev, dev_flow, attr, items, actions, error);
+}
+
+/**
+ * Split the flow to subflow set. The splitters might be linked
+ * in the chain, like this:
+ * flow_create_split_outer() calls:
+ * flow_create_split_meter() calls:
+ * flow_create_split_metadata(meter_subflow_0) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ * flow_create_split_metadata(meter_subflow_1) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ *
+ * This provides a flexible way to add new levels of flow splitting.
+ * All successfully created subflows are included in the
+ * parent flow dev_flows list.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_outer(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ int ret;
+
+ ret = flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ assert(ret <= 0);
+ return ret;
+}
+
+/**
* Create a flow and add it to @p list.
*
* @param dev
@@ -2899,16 +2996,15 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
buf->entry[0].pattern = (void *)(uintptr_t)items;
}
for (i = 0; i < buf->entries; ++i) {
- dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
- p_actions_rx, error);
- if (!dev_flow)
- goto error;
- dev_flow->flow = flow;
- dev_flow->external = external;
- LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
- ret = flow_drv_translate(dev, dev_flow, attr,
- buf->entry[i].pattern,
- p_actions_rx, error);
+ /*
+ * The splitter may create multiple dev_flows,
+ * depending on configuration. In the simplest
+ * case it just creates the unmodified original flow.
+ */
+ ret = flow_create_split_outer(dev, flow, attr,
+ buf->entry[i].pattern,
+ p_actions_rx, external,
+ error);
if (ret < 0)
goto error;
}
--
1.8.3.1
* [dpdk-dev] [PATCH 19/20] net/mlx5: split Rx flows to provide metadata copy
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (17 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 18/20] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 20/20] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
` (3 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Values set by MARK and SET_META actions should be carried over
to the VF representor in case of flow miss on Tx path. However,
as not all metadata registers are preserved across the different
domains (NIC Rx/Tx and E-Switch FDB), as a workaround, those
values should be carried by reg_c's which are preserved across
domains and copied to STE flow_tag (MARK) and reg_b (META) fields
in the last stage of flow steering, in order to scatter those
values to flow_tag and flow_table_metadata of CQE.
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it should
be manually set by a flow per MARK ID. For this purpose, there
should be a dedicated flow table - RX_CP_TBL and all the Rx flow
should pass by the table to properly copy values.
As the last action of Rx flow steering must be a terminal action
such as QUEUE, RSS or DROP, if a user flow has Q/RSS action, the
flow must be split in order to pass through the RX_CP_TBL. The
remaining Q/RSS action will then be performed in another dedicated
action table - RX_ACT_TBL.
For example, for an ingress flow:
pattern,
actions_having_QRSS
it must be split into two flows. The first one is,
pattern,
actions_except_QRSS / copy (reg_c[2] := flow_id) / jump to RX_CP_TBL
and the second one in RX_ACT_TBL.
(if reg_c[2] == flow_id),
action_QRSS
where flow_id is a uniquely allocated and managed identifier.
This patch implements the Rx flow splitting and builds the RX_ACT_TBL.
Also, for each egress flow on NIC Tx, a copy action (reg_c[] = reg_a)
should be added in order to transfer the metadata from the WQE.
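For illustration (not part of the patch): an ordinary application flow that
triggers the described split - the split is internal to the PMD and not
visible at the rte_flow API level. The port number, queue index, mark value
and function name are assumptions.

#include <rte_flow.h>

static int
create_mark_queue_flow(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		/* Terminal Q/RSS action - this is what forces the split. */
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/*
	 * Internally the PMD replaces QUEUE with SET_TAG(reg_c[2] = flow_id)
	 * plus a JUMP to RX_CP_TBL, and re-creates the QUEUE action in
	 * RX_ACT_TBL matching on reg_c[2] == flow_id.
	 */
	return rte_flow_create(port_id, &attr, pattern, actions, &error) ?
	       0 : -1;
}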
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 8 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.c | 428 ++++++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 1 +
4 files changed, 436 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index fb7b94b..6359bc9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2411,6 +2411,12 @@ struct mlx5_flow_id_pool *
err = mlx5_alloc_shared_dr(priv);
if (err)
goto error;
+ priv->qrss_id_pool = mlx5_flow_id_pool_alloc();
+ if (!priv->qrss_id_pool) {
+ DRV_LOG(ERR, "can't create flow id pool");
+ err = ENOMEM;
+ goto error;
+ }
}
/* Supported Verbs flow priority number detection. */
err = mlx5_flow_discover_priorities(eth_dev);
@@ -2463,6 +2469,8 @@ struct mlx5_flow_id_pool *
close(priv->nl_socket_rdma);
if (priv->vmwa_context)
mlx5_vlan_vmwa_exit(priv->vmwa_context);
+ if (priv->qrss_id_pool)
+ mlx5_flow_id_pool_release(priv->qrss_id_pool);
if (own_domain_id)
claim_zero(rte_eth_switch_domain_free(priv->domain_id));
rte_free(priv);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 92d445a..9c1a88a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -733,6 +733,7 @@ struct mlx5_priv {
uint32_t nl_sn; /* Netlink message sequence number. */
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
+ struct mlx5_flow_id_pool *qrss_id_pool;
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a310a88..0ed8308 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2218,6 +2218,49 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Allocate unique ID for the split Q/RSS subflows. */
+static uint32_t
+flow_qrss_get_id(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t qrss_id, ret;
+
+ ret = mlx5_flow_id_get(priv->qrss_id_pool, &qrss_id);
+ if (ret)
+ return 0;
+ assert(qrss_id);
+ return qrss_id;
+}
+
+/* Free unique ID for the split Q/RSS subflows. */
+static void
+flow_qrss_free_id(struct rte_eth_dev *dev, uint32_t qrss_id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ if (qrss_id)
+ mlx5_flow_id_release(priv->qrss_id_pool, qrss_id);
+}
+
+/**
+ * Release resource related QUEUE/RSS action split.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Flow to release id's from.
+ */
+static void
+flow_mreg_split_qrss_release(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow *dev_flow;
+
+ LIST_FOREACH(dev_flow, &flow->dev_flows, next)
+ if (dev_flow->qrss_id)
+ flow_qrss_free_id(dev, dev_flow->qrss_id);
+}
+
static int
flow_null_validate(struct rte_eth_dev *dev __rte_unused,
const struct rte_flow_attr *attr __rte_unused,
@@ -2507,6 +2550,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
const struct mlx5_flow_driver_ops *fops;
enum mlx5_flow_drv_type type = flow->drv_type;
+ flow_mreg_split_qrss_release(dev, flow);
assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
fops = flow_get_drv_ops(type);
fops->destroy(dev, flow);
@@ -2577,6 +2621,41 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Get QUEUE/RSS action from the action list.
+ *
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] qrss
+ * Pointer to the return pointer. It is set to point at the QUEUE/RSS
+ * action if one is found and is left untouched otherwise.
+ *
+ * @return
+ * Total number of actions.
+ */
+static int
+flow_parse_qrss_action(const struct rte_flow_action actions[],
+ const struct rte_flow_action **qrss)
+{
+ int actions_n = 0;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ *qrss = actions;
+ break;
+ default:
+ break;
+ }
+ actions_n++;
+ }
+ /* Count RTE_FLOW_ACTION_TYPE_END. */
+ return actions_n + 1;
+}
+
+/**
 * Check if the flow should be split due to hairpin.
 * The reason for the split is that in current HW we can't
 * support encap on Rx, so if a flow has encap we move it
@@ -2828,6 +2907,351 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Split action list having QUEUE/RSS for metadata register copy.
+ *
+ * Once Q/RSS action is detected in user's action list, the flow action
+ * should be split in order to copy metadata registers, which will happen in
+ * RX_CP_TBL like,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * The Q/RSS action will be performed on RX_ACT_TBL after passing by RX_CP_TBL.
+ * This is because the last action of each flow must be a terminal action
+ * (QUEUE, RSS or DROP).
+ *
+ * Flow ID must be allocated to identify actions in the RX_ACT_TBL and it is
+ * stored and kept in the mlx5_flow structure per each sub_flow.
+ *
+ * The Q/RSS action is replaced with,
+ * - SET_TAG, setting the allocated flow ID to reg_c[2].
+ * And the following JUMP action is added at the end,
+ * - JUMP, to RX_CP_TBL.
+ *
+ * A flow to perform remained Q/RSS action will be created in RX_ACT_TBL by
+ * flow_create_split_metadata() routine. The flow will look like,
+ * - If flow ID matches (reg_c[2]), perform Q/RSS.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] split_actions
+ * Pointer to store split actions to jump to CP_TBL.
+ * @param[in] actions
+ * Pointer to the list of original flow actions.
+ * @param[in] qrss
+ * Pointer to the Q/RSS action.
+ * @param[in] actions_n
+ * Number of original actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * non-zero unique flow_id on success, otherwise 0 and
+ * error/rte_errno are set.
+ */
+static uint32_t
+flow_mreg_split_qrss_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *split_actions,
+ const struct rte_flow_action *actions,
+ const struct rte_flow_action *qrss,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_rte_flow_action_set_tag *set_tag;
+ struct rte_flow_action_jump *jump;
+ const int qrss_idx = qrss - actions;
+ uint32_t flow_id;
+ int ret = 0;
+
+ /*
+ * Given actions will be split
+ * - Replace QUEUE/RSS action with SET_TAG to set flow ID.
+ * - Add jump to mreg CP_TBL.
+ * As a result, there will be one more action.
+ */
+ ++actions_n;
+ /*
+ * Allocate the new subflow ID. This one is unique within
+ * device and not shared with representors. Otherwise,
+ * we would have to resolve a multi-thread access
+ * synchronization issue. Each flow on the shared device is appended
+ * with source vport identifier, so the resulting
+ * flows will be unique in the shared (by master and
+ * representors) domain even if they have coinciding
+ * IDs.
+ */
+ flow_id = flow_qrss_get_id(dev);
+ if (!flow_id)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "can't allocate id "
+ "for split Q/RSS subflow");
+ /* Internal SET_TAG action to set flow ID. */
+ set_tag = (void *)(split_actions + actions_n);
+ *set_tag = (struct mlx5_rte_flow_action_set_tag){
+ .data = flow_id,
+ };
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ set_tag->id = ret;
+ /* JUMP action to jump to mreg copy table (CP_TBL). */
+ jump = (void *)(set_tag + 1);
+ *jump = (struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ };
+ /* Construct new actions array. */
+ memcpy(split_actions, actions, sizeof(*split_actions) * actions_n);
+ /* Replace QUEUE/RSS action. */
+ split_actions[qrss_idx] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ .conf = set_tag,
+ };
+ split_actions[actions_n - 2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = jump,
+ };
+ split_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return flow_id;
+}
+
+/**
+ * Extend the given action list for Tx metadata copy.
+ *
+ * Copy the given action list to the ext_actions and add flow metadata register
+ * copy action in order to copy reg_a set by WQE to reg_c[0].
+ *
+ * @param[out] ext_actions
+ * Pointer to the extended action list.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[in] actions_n
+ * Number of actions in the list.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_mreg_tx_copy_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *ext_actions,
+ const struct rte_flow_action *actions,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_flow_action_copy_mreg *cp_mreg =
+ (struct mlx5_flow_action_copy_mreg *)
+ (ext_actions + actions_n + 1);
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->dst = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_TX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->src = ret;
+ memcpy(ext_actions, actions,
+ sizeof(*ext_actions) * actions_n);
+ ext_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = cp_mreg,
+ };
+ ext_actions[actions_n] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return 0;
+}
+
+/**
+ * The splitting for metadata feature.
+ *
+ * - Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ *
+ * - All the actions on NIC Tx should have a mreg copy action to
+ * copy reg_a from WQE to reg_c[0].
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by a request external to the PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_metadata(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action *qrss = NULL;
+ struct rte_flow_action *ext_actions = NULL;
+ struct mlx5_flow *dev_flow = NULL;
+ uint32_t qrss_id = 0;
+ size_t act_size;
+ int actions_n;
+ int ret;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev))
+ return flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ actions_n = flow_parse_qrss_action(actions, &qrss);
+ if (qrss) {
+ /* Exclude hairpin flows from splitting. */
+ if (qrss->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *queue;
+
+ queue = qrss->conf;
+ if (mlx5_rxq_get_type(dev, queue->index) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ } else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ const struct rte_flow_action_rss *rss;
+
+ rss = qrss->conf;
+ if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ }
+ }
+ if (qrss) {
+ /*
+ * Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct rte_flow_action_set_tag) +
+ sizeof(struct rte_flow_action_jump);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /*
+ * Create the new actions list with removed Q/RSS action
+ * and appended set tag and jump to register copy table
+ * (RX_CP_TBL). We should preallocate unique tag ID here
+ * in advance, because it is needed for set tag action.
+ */
+ qrss_id = flow_mreg_split_qrss_prep(dev, ext_actions, actions,
+ qrss, actions_n, error);
+ if (!qrss_id) {
+ ret = -rte_errno;
+ goto exit;
+ }
+ } else if (attr->egress && !attr->transfer) {
+ /*
+ * All the actions on NIC Tx should have a metadata register
+ * copy action to copy reg_a from WQE to reg_c[meta]
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct mlx5_flow_action_copy_mreg);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /* Create the action list appended with copy register. */
+ ret = flow_mreg_tx_copy_prep(dev, ext_actions, actions,
+ actions_n, error);
+ if (ret < 0)
+ goto exit;
+ }
+ /* Add the unmodified original or prefix subflow. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow, attr, items,
+ ext_actions ? ext_actions : actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ if (qrss_id) {
+ const struct rte_flow_attr q_attr = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ .ingress = 1,
+ };
+ /* Internal PMD action to set register. */
+ struct mlx5_rte_flow_item_tag q_tag_spec = {
+ .data = qrss_id,
+ .id = 0,
+ };
+ struct rte_flow_item q_items[] = {
+ {
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &q_tag_spec,
+ .last = NULL,
+ .mask = NULL,
+ },
+ {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action q_actions[] = {
+ {
+ .type = qrss->type,
+ .conf = qrss->conf,
+ },
+ {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ uint64_t hash_fields = dev_flow->hash_fields;
+ /*
+ * Put the unique id into the prefix flow, as it is destroyed
+ * after the prefix flow; the id will be freed only when there
+ * are no actual flows left with this id and identifier
+ * reallocation becomes possible (for example, for other flows
+ * in other threads).
+ */
+ dev_flow->qrss_id = qrss_id;
+ qrss_id = 0;
+ dev_flow = NULL;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ goto exit;
+ q_tag_spec.id = ret;
+ /* Add suffix subflow to execute Q/RSS. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow,
+ &q_attr, q_items, q_actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ dev_flow->hash_fields = hash_fields;
+ }
+
+exit:
+ /*
+ * We do not destroy the partially created sub_flows in case of error.
+ * They are included in the parent flow list and will be destroyed
+ * by flow_drv_destroy().
+ */
+ flow_qrss_free_id(dev, qrss_id);
+ rte_free(ext_actions);
+ return ret;
+}
+
+/**
* Split the flow to subflow set. The splitters might be linked
* in the chain, like this:
* flow_create_split_outer() calls:
@@ -2872,8 +3296,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
{
int ret;
- ret = flow_create_split_inner(dev, flow, NULL, attr, items,
- actions, external, error);
+ ret = flow_create_split_metadata(dev, flow, attr, items,
+ actions, external, error);
assert(ret <= 0);
return ret;
}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ef16aef..c71938b 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -500,6 +500,7 @@ struct mlx5_flow {
#endif
struct mlx5_flow_verbs verbs;
};
+ uint32_t qrss_id; /**< Unique Q/RSS suffix subflow tag. */
bool external; /**< true if the flow is created external to PMD. */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH 20/20] net/mlx5: add metadata register copy table
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (18 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 19/20] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
@ 2019-11-05 8:01 ` Viacheslav Ovsiienko
2019-11-05 9:35 ` [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Matan Azrad
` (2 subsequent siblings)
22 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-05 8:01 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it
should be manually set by a flow per each unique MARK ID. For
this purpose, there should be a dedicated flow table -
RX_CP_TBL and all the Rx flow should pass by the table
to properly copy values from the register to flow tag field.
And for each MARK action, a copy flow should be added
to RX_CP_TBL according to the MARK ID like:
(if reg_c[mark] == mark_id),
flow_tag := mark_id / reg_b := reg_c[meta] / jump to RX_ACT_TBL
For SET_META action, there can be only one default flow like:
reg_b := reg_c[meta] / jump to RX_ACT_TBL
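A minimal application-side sketch (illustrative only, not taken from this
patch) of what the copy table preserves: a MARK set by an ingress flow is
copied from reg_c[1] into the CQE flow_tag in RX_CP_TBL, so it still reaches
the mbuf on receive. Here port_id and the received mbuf m are placeholders
and the pattern is arbitrary:

    #include <rte_flow.h>
    #include <rte_mbuf.h>

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_action_mark mark = { .id = 0x1234 };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow *f = rte_flow_create(port_id, &attr, pattern,
                                         actions, &err);
    /* On Rx, the mark delivered through RX_CP_TBL shows up in the mbuf: */
    if (m->ol_flags & PKT_RX_FDIR_ID)
        printf("mark %u\n", m->hash.fdir.hi);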
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.h | 7 +-
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_flow.c | 432 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 19 ++
drivers/net/mlx5/mlx5_flow_dv.c | 10 +-
5 files changed, 464 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9c1a88a..470778b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -567,8 +567,9 @@ struct mlx5_flow_tbl_resource {
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
/* Reserve the last two tables for metadata register copy. */
#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
-#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
- (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP (MLX5_MAX_TABLES - 2)
+/* Tables for metering splits should be added here. */
+#define MLX5_MAX_TABLES_EXTERNAL (MLX5_MAX_TABLES - 3)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -734,6 +735,8 @@ struct mlx5_priv {
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
struct mlx5_flow_id_pool *qrss_id_pool;
+ struct mlx5_shtable_bucket mreg_cp_tbl[MLX5_FLOW_MREG_HTABLE_SZ];
+ /* Hash table for the Rx metadata register copy table. */
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index a77c430..fa2009c 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -145,6 +145,9 @@
#define MLX5_XMETA_MODE_META16 1
#define MLX5_XMETA_MODE_META32 2
+/* Size of the simple hash table for the metadata register copy table. */
+#define MLX5_FLOW_MREG_HTABLE_SZ 1024
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0ed8308..2f6cd92 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -667,7 +667,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ /*
+ * To support metadata register copy on Tx loopback,
+ * this must always be enabled (metadata may arrive
+ * from another port - not from local flows only).
+ */
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->rxq.mark = 1;
rxq_ctrl->flow_mark_n++;
}
@@ -731,7 +741,12 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->flow_mark_n--;
rxq_ctrl->rxq.mark = !!rxq_ctrl->flow_mark_n;
}
@@ -2727,6 +2742,392 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Declare flow create/destroy prototype in advance. */
+static struct rte_flow *
+flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error);
+
+static void
+flow_list_destroy(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ struct rte_flow *flow);
+
+/**
+ * Add a flow of copying flow metadata registers in RX_CP_TBL.
+ *
+ * As mark_id is unique, if there's already a registered flow for the mark_id,
+ * return by increasing the reference counter of the resource. Otherwise, create
+ * the resource (mcp_res) and flow.
+ *
+ * Flow looks like,
+ * - If ingress port is ANY and reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For the default flow (zero mark_id), the flow looks like,
+ * - If ingress port is ANY,
+ * reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param mark_id
+ * ID of MARK action, zero means default flow for META.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * Associated resource on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_mreg_copy_resource *
+flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .ingress = 1,
+ };
+ struct mlx5_rte_flow_item_tag tag_spec = {
+ .data = mark_id,
+ };
+ struct rte_flow_item items[] = {
+ [1] = { .type = RTE_FLOW_ITEM_TYPE_END, },
+ };
+ struct rte_flow_action_mark ftag = {
+ .id = mark_id,
+ };
+ struct mlx5_flow_action_copy_mreg cp_mreg = {
+ .dst = REG_B,
+ .src = 0,
+ };
+ struct rte_flow_action_jump jump = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ };
+ struct rte_flow_action actions[] = {
+ [3] = { .type = RTE_FLOW_ACTION_TYPE_END, },
+ };
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ int ret;
+
+ /* Fill the register fields in the flow. */
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return NULL;
+ tag_spec.id = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return NULL;
+ cp_mreg.src = ret;
+ /* Check if already registered. */
+ mcp_res = (void *)mlx5_shtable_search(priv->mreg_cp_tbl,
+ RTE_DIM(priv->mreg_cp_tbl),
+ mark_id);
+ if (mcp_res) {
+ /* For non-default rule. */
+ if (mark_id)
+ mcp_res->refcnt++;
+ assert(mark_id || mcp_res->refcnt == 1);
+ return mcp_res;
+ }
+ /* Provide the full width of FLAG specific value. */
+ if (mark_id == (priv->sh->dv_regc0_mask & MLX5_FLOW_MARK_DEFAULT))
+ tag_spec.data = MLX5_FLOW_MARK_DEFAULT;
+ /* Build a new flow. */
+ if (mark_id) {
+ items[0] = (struct rte_flow_item){
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &tag_spec,
+ };
+ items[1] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_MARK,
+ .conf = &ftag,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[3] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ } else {
+ /* Default rule, wildcard match. */
+ attr.priority = MLX5_FLOW_PRIO_RSVD;
+ items[0] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ }
+ /* Build a new entry. */
+ mcp_res = rte_zmalloc(__func__, sizeof(*mcp_res), 0);
+ if (!mcp_res) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ /*
+ * The copy flows are not included in any list. They
+ * are referenced from other flows and cannot be
+ * applied, removed or deleted in arbitrary order
+ * by list traversal.
+ */
+ mcp_res->flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, error);
+ if (!mcp_res->flow)
+ goto error;
+ mcp_res->refcnt++;
+ mcp_res->htbl_ent.key = mark_id;
+ ret = mlx5_shtable_insert(priv->mreg_cp_tbl,
+ RTE_DIM(priv->mreg_cp_tbl),
+ &mcp_res->htbl_ent);
+ assert(!ret);
+ return mcp_res;
+error:
+ rte_free(mcp_res);
+ return NULL;
+}
+
+/**
+ * Release flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_del_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+
+ if (!mcp_res)
+ return;
+ if (flow->copy_applied) {
+ assert(mcp_res->appcnt);
+ flow->copy_applied = 0;
+ --mcp_res->appcnt;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+ }
+ /*
+ * We do not check availability of metadata registers here,
+ * because copy resources are allocated in this case.
+ */
+ if (--mcp_res->refcnt)
+ return;
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_shtable_remove(&mcp_res->htbl_ent);
+ rte_free(mcp_res);
+ flow->mreg_copy = NULL;
+}
+
+/**
+ * Start flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_start_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+ int ret;
+
+ if (!mcp_res || flow->copy_applied)
+ return 0;
+ if (!mcp_res->appcnt) {
+ ret = flow_drv_apply(dev, mcp_res->flow, NULL);
+ if (ret)
+ return ret;
+ }
+ ++mcp_res->appcnt;
+ flow->copy_applied = 1;
+ return 0;
+}
+
+/**
+ * Stop flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_stop_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+
+ if (!mcp_res || !flow->copy_applied)
+ return;
+ assert(mcp_res->appcnt);
+ --mcp_res->appcnt;
+ flow->copy_applied = 0;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+}
+
+/**
+ * Remove the default copy action from RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+flow_mreg_del_default_copy_action(struct rte_eth_dev *dev)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ /* Check if default flow is registered. */
+ mcp_res = (void *)mlx5_shtable_search(priv->mreg_cp_tbl,
+ RTE_DIM(priv->mreg_cp_tbl), 0);
+ if (!mcp_res)
+ return;
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_shtable_remove(&mcp_res->htbl_ent);
+ rte_free(mcp_res);
+}
+
+/**
+ * Add the default copy action in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 for success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_add_default_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!priv->config.dv_flow_en ||
+ !priv->config.dv_xmeta_en ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ mcp_res = flow_mreg_add_copy_action(dev, 0, error);
+ if (!mcp_res)
+ return -rte_errno;
+ return 0;
+}
+
+/**
+ * Add a flow of copying flow metadata registers in RX_CP_TBL.
+ *
+ * All the flows having a Q/RSS action should be split by
+ * flow_mreg_split_qrss_prep() to pass by RX_CP_TBL. A flow in the RX_CP_TBL
+ * performs the following,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * As CQE's flow_tag is not a register, it can't be simply copied from reg_c[1]
+ * but there should be a flow per each MARK ID set by MARK action.
+ *
+ * For the aforementioned reason, if there's a MARK action in flow's action
+ * list, a corresponding flow should be added to the RX_CP_TBL in order to copy
+ * the MARK ID to CQE's flow_tag like,
+ * - If reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For SET_META action which stores value in reg_c[0], as the destination is
+ * also a flow metadata register (reg_b), adding a default flow is enough. Zero
+ * MARK ID means the default flow. The default flow looks like,
+ * - For all flow, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Pointer to flow structure.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_update_copy_table(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *actions,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ const struct rte_flow_action_mark *mark;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ !config->dv_xmeta_en ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ /* Find MARK action. */
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mcp_res = flow_mreg_add_copy_action
+ (dev, MLX5_FLOW_MARK_DEFAULT, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ mark = (const struct rte_flow_action_mark *)
+ actions->conf;
+ mcp_res =
+ flow_mreg_add_copy_action(dev, mark->id, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ default:
+ break;
+ }
+ }
+ return 0;
+}
+
#define MLX5_MAX_SPLIT_ACTIONS 24
#define MLX5_MAX_SPLIT_ITEMS 24
@@ -3450,6 +3851,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
+ /*
+ * Update the metadata register copy table. If extensive
+ * metadata feature is enabled and registers are supported
+ * we might create the extra rte_flow for each unique
+ * MARK/FLAG action ID.
+ */
+ if (external || attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP) {
+ ret = flow_mreg_update_copy_table(dev, flow, actions, error);
+ if (ret)
+ goto error;
+ }
if (dev->data->dev_started) {
ret = flow_drv_apply(dev, flow, error);
if (ret < 0)
@@ -3465,6 +3877,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
hairpin_id);
return NULL;
error:
+ assert(flow);
+ flow_mreg_del_copy_action(dev, flow);
ret = rte_errno; /* Save rte_errno before cleanup. */
if (flow->hairpin_flow_id)
mlx5_flow_id_release(priv->sh->flow_id_pool,
@@ -3573,6 +3987,7 @@ struct rte_flow *
flow_drv_destroy(dev, flow);
if (list)
TAILQ_REMOVE(list, flow, next);
+ flow_mreg_del_copy_action(dev, flow);
rte_free(flow->fdir);
rte_free(flow);
}
@@ -3609,8 +4024,11 @@ struct rte_flow *
{
struct rte_flow *flow;
- TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next)
+ TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
flow_drv_remove(dev, flow);
+ flow_mreg_stop_copy_action(dev, flow);
+ }
+ flow_mreg_del_default_copy_action(dev);
flow_rxq_flags_clear(dev);
}
@@ -3632,7 +4050,15 @@ struct rte_flow *
struct rte_flow_error error;
int ret = 0;
+ /* Make sure default copy action (reg_c[0] -> reg_b) is created. */
+ ret = flow_mreg_add_default_copy_action(dev, &error);
+ if (ret < 0)
+ return -rte_errno;
+ /* Apply Flows created by application. */
TAILQ_FOREACH(flow, list, next) {
+ ret = flow_mreg_start_copy_action(dev, flow);
+ if (ret < 0)
+ goto error;
ret = flow_drv_apply(dev, flow, &error);
if (ret < 0)
goto error;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c71938b..1fa5fb9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -38,6 +38,7 @@ enum mlx5_rte_flow_item_type {
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_MARK,
MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
@@ -417,6 +418,21 @@ struct mlx5_flow_dv_push_vlan_action_resource {
rte_be32_t vlan_tag; /**< VLAN tag value. */
};
+/* Metadata register copy table entry. */
+struct mlx5_flow_mreg_copy_resource {
+ /*
+ * Hash table for copy table.
+ * - Key is 32-bit MARK action ID.
+ * - MUST be the first entry.
+ */
+ struct mlx5_shtable_entry htbl_ent;
+ LIST_ENTRY(mlx5_flow_mreg_copy_resource) next;
+ /* List entry for device flows. */
+ uint32_t refcnt; /* Reference counter. */
+ uint32_t appcnt; /* Apply/Remove counter. */
+ struct rte_flow *flow; /* Built flow for copy. */
+};
+
/*
* Max number of actions per DV flow.
* See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED
@@ -510,10 +526,13 @@ struct rte_flow {
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
+ struct mlx5_flow_mreg_copy_resource *mreg_copy;
+ /**< pointer to metadata register copy table resource. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
+ uint32_t copy_applied:1; /**< The MARK copy flow is applied. */
};
typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 19f58cb..6a7e612 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4064,8 +4064,11 @@ struct field_modify_info modify_tcp[] = {
NULL,
"groups are not supported");
#else
- uint32_t max_group = attributes->transfer ? MLX5_MAX_TABLES_FDB :
- MLX5_MAX_TABLES;
+ uint32_t max_group = attributes->transfer ?
+ MLX5_MAX_TABLES_FDB :
+ external ?
+ MLX5_MAX_TABLES_EXTERNAL :
+ MLX5_MAX_TABLES;
uint32_t table;
int ret;
@@ -4672,6 +4675,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
@@ -6508,6 +6512,8 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
break;
}
+ /* Fall-through */
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
/* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (19 preceding siblings ...)
2019-11-05 8:01 ` [dpdk-dev] [PATCH 20/20] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
@ 2019-11-05 9:35 ` Matan Azrad
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
22 siblings, 0 replies; 64+ messages in thread
From: Matan Azrad @ 2019-11-05 9:35 UTC (permalink / raw)
To: Slava Ovsiienko, dev
Cc: Raslan Darawsheh, Thomas Monjalon, Ori Kam, Yongseok Koh
From: Viacheslav Ovsiienko
> The modern networks operate on the base of the packet switching
> approach, and in-network environment data are transmitted as the packets.
> Within the host besides the data, actually transmitted on the wire as packets,
> there might some out-of-band data helping to process packets. These data
> are named as metadata, exist on a per-packet basis and are attached to each
> packet as some extra dedicated storage (in meaning it besides the packet
> data itself).
>
> In the DPDK network data are represented as mbuf structure chains and go
> along the application/DPDK datapath. From the other side, DPDK provides
> Flow API to control the flow engine. Being precise, there are two kinds of
> metadata in the DPDK, the one is purely software metadata (as fields of
> mbuf - flags, packet types, data length, etc.), and the other is metadata
> within flow engine.
> In this scope, we cover the second type (flow engine metadata) only.
>
> The flow engine metadata is some extra data, supported on the per-packet
> basis and usually handled by hardware inside flow engine.
>
> Initially, there were proposed two metadata related actions:
>
> - RTE_FLOW_ACTION_TYPE_FLAG
> - RTE_FLOW_ACTION_TYPE_MARK
>
> These actions set the special flag in the packet metadata, MARK action stores
> some specified value in the metadata storage, and, on the packet receiving
> PMD puts the flag and value to the mbuf and applications can see the packet
> was threated inside flow engine according to the appropriate RTE flow(s).
> MARK and FLAG are like some kind of gateway to transfer some per-packet
> information from the flow engine to the application via receiving datapath.
> Also, there is the item of type RTE_FLOW_ITEM_TYPE_MARK provided. It
> allows us to extend the flow match pattern with the capability to match the
> metadata values set by MARK/FLAG actions on other flows.
>
> From the datapath point of view, the MARK and FLAG are related to the
> receiving side only. It would useful to have the same gateway on the
> transmitting side and there was the feature of type
> RTE_FLOW_ITEM_TYPE_META was proposed. The application can fill the field
> in mbuf and this value will be transferred to some field in the packet
> metadata inside the flow engine.
>
> It did not matter whether these metadata fields are shared because of
> MARK and META items belonged to different domains (receiving and
> transmitting) and could be vendor-specific.
>
> So far, so good, DPDK proposes some entities to control metadata inside the
> flow engine and gateways to exchange these values on a per-packet basis via
> datapath.
>
> As we can see, the MARK and META means are not symmetric, there is
> absent action which would allow us to set META value on the transmitting
> path. So, the action of type:
>
> - RTE_FLOW_ACTION_TYPE_SET_META is proposed.
>
> The next, applications raise the new requirements for packet metadata. The
> flow engines are getting more complex, internal switches are introduced,
> multiple ports might be supported within the same flow engine namespace.
> From the DPDK points of view, it means the packets might be sent on one
> eth_dev port and received on the other one, and the packet path inside the
> flow engine entirely belongs to the same hardware device. The simplest
> example is SR-IOV with PF, VFs and the representors. And there is a brilliant
> opportunity to provide some out-of-band channel to transfer some extra
> data from one port to another one, besides the packet data itself. And
> applications would like to use this opportunity.
>
> Improving the metadata definitions it is proposed to:
> - suppose MARK and META metadata fields not shared, dedicated
> - extend applying area for MARK and META items/actions for all
> flow engine domains - transmitting and receiving
> - allow MARK and META metadata to be preserved while crossing
> the flow domains (from transmit origin through flow database
> inside (E-)switch to receiving side domain), in simple words,
> to allow metadata to convey the packet thought entire flow
> engine space.
>
> Another new proposed feature is transient per-packet storage inside the
> flow engine. It might have a lot of use cases.
> For example, if there is VXLAN tunneled traffic and some flow performs
> VXLAN decapsulation and wishes to save information regarding the dropped
> header it could use this temporary transient storage. The tools to maintain
> this storage are traditional (for DPDK rte_flow API):
>
> - RTE_FLOW_ACTION_TYPE_SET_TAG - to set value
> - RTE_FLOW_ACTION_TYPE_SET_ITEM - to match on
>
> There are primary properties of the proposed storage:
> - the storage is presented as an array of 32-bit opaque values
> - the size of array (or even bitmap of available indices) is
> vendor specific and is subject to run-time trial
> - it is transient, it means it exists only inside flow engine,
> no gateways for interacting with datapath, applications have
> way neither to specify these data on transmitting nor to get
> these data on receiving
>
> This patchset implements the abovementioned extensive metadata feature
> in the mlx5 PMD.
>
> The patchset must be applied after public RTE API updates:
>
> [1]
> http://patches.dpdk.org/patch/62354/
> [2]
> http://patches.dpdk.org/patch/62355/
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
For all the series:
Acked-by: Matan Azrad <matan@mellanox.com>
> Viacheslav Ovsiienko (20):
> net/mlx5: convert internal tag endianness
> net/mlx5: update modify header action translator
> net/mlx5: add metadata register copy
> net/mlx5: refactor flow structure
> net/mlx5: update flow functions
> net/mlx5: update meta register matcher set
> net/mlx5: rename structure and function
> net/mlx5: check metadata registers availability
> net/mlx5: add devarg for extensive metadata support
> net/mlx5: adjust shared register according to mask
> net/mlx5: check the maximal modify actions number
> net/mlx5: update metadata register id query
> net/mlx5: add flow tag support
> net/mlx5: extend flow mark support
> net/mlx5: extend flow meta data support
> net/mlx5: add meta data support to Rx datapath
> net/mlx5: add simple hash table
> net/mlx5: introduce flow splitters chain
> net/mlx5: split Rx flows to provide metadata copy
> net/mlx5: add metadata register copy table
>
> doc/guides/nics/mlx5.rst | 49 +
> drivers/net/mlx5/mlx5.c | 135 ++-
> drivers/net/mlx5/mlx5.h | 19 +-
> drivers/net/mlx5/mlx5_defs.h | 7 +
> drivers/net/mlx5/mlx5_ethdev.c | 8 +-
> drivers/net/mlx5/mlx5_flow.c | 1178 ++++++++++++++++++++++-
> drivers/net/mlx5/mlx5_flow.h | 108 ++-
> drivers/net/mlx5/mlx5_flow_dv.c | 1544
> ++++++++++++++++++++++++------
> drivers/net/mlx5/mlx5_flow_verbs.c | 55 +-
> drivers/net/mlx5/mlx5_prm.h | 45 +-
> drivers/net/mlx5/mlx5_rxtx.c | 6 +
> drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +-
> drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +
> drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +-
> drivers/net/mlx5/mlx5_utils.h | 115 ++-
> 15 files changed, 2922 insertions(+), 422 deletions(-)
>
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 00/19] net/mlx5: implement extensive metadata feature
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (20 preceding siblings ...)
2019-11-05 9:35 ` [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Matan Azrad
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
` (18 more replies)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
22 siblings, 19 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The modern networks operate on the base of the packet switching
approach, and in-network environment data are transmitted as the
packets. Within the host besides the data, actually transmitted
on the wire as packets, there might some out-of-band data helping
to process packets. These data are named as metadata, exist on
a per-packet basis and are attached to each packet as some extra
dedicated storage (in meaning it besides the packet data itself).
In the DPDK network data are represented as mbuf structure chains
and go along the application/DPDK datapath. From the other side,
DPDK provides Flow API to control the flow engine. Being precise,
there are two kinds of metadata in the DPDK, the one is purely
software metadata (as fields of mbuf - flags, packet types, data
length, etc.), and the other is metadata within flow engine.
In this scope, we cover the second type (flow engine metadata) only.
The flow engine metadata is some extra data, supported on the
per-packet basis and usually handled by hardware inside flow
engine.
Initially, there were proposed two metadata related actions:
- RTE_FLOW_ACTION_TYPE_FLAG
- RTE_FLOW_ACTION_TYPE_MARK
These actions set the special flag in the packet metadata, MARK
action stores some specified value in the metadata storage, and,
on the packet receiving PMD puts the flag and value to the mbuf
and applications can see the packet was threated inside flow engine
according to the appropriate RTE flow(s). MARK and FLAG are like
some kind of gateway to transfer some per-packet information from
the flow engine to the application via receiving datapath. Also,
there is the item of type RTE_FLOW_ITEM_TYPE_MARK provided. It
allows us to extend the flow match pattern with the capability
to match the metadata values set by MARK/FLAG actions on other
flows.
From the datapath point of view, the MARK and FLAG are related
to the receiving side only. It would useful to have the same gateway
on the transmitting side and there was the feature of type
RTE_FLOW_ITEM_TYPE_META was proposed. The application can fill
the field in mbuf and this value will be transferred to some field
in the packet metadata inside the flow engine.
It did not matter whether these metadata fields are shared because
of MARK and META items belonged to different domains (receiving and
transmitting) and could be vendor-specific.
So far, so good, DPDK proposes some entities to control metadata
inside the flow engine and gateways to exchange these values on
a per-packet basis via datapath.
As we can see, the MARK and META means are not symmetric, there
is absent action which would allow us to set META value on the
transmitting path. So, the action of type:
- RTE_FLOW_ACTION_TYPE_SET_META is proposed.
The next, applications raise the new requirements for packet
metadata. The flow engines are getting more complex, internal
switches are introduced, multiple ports might be supported within
the same flow engine namespace. From the DPDK points of view, it
means the packets might be sent on one eth_dev port and received
on the other one, and the packet path inside the flow engine
entirely belongs to the same hardware device. The simplest example
is SR-IOV with PF, VFs and the representors. And there is a
brilliant opportunity to provide some out-of-band channel to
transfer some extra data from one port to another one, besides
the packet data itself. And applications would like to use this
opportunity.
Improving the metadata definitions it is proposed to:
- suppose MARK and META metadata fields not shared, dedicated
- extend applying area for MARK and META items/actions for all
flow engine domains - transmitting and receiving
- allow MARK and META metadata to be preserved while crossing
the flow domains (from transmit origin through flow database
inside (E-)switch to receiving side domain), in simple words,
to allow metadata to convey the packet thought entire flow
engine space.
Another new proposed feature is transient per-packet storage
inside the flow engine. It might have a lot of use cases.
For example, if there is VXLAN tunneled traffic and some flow
performs VXLAN decapsulation and wishes to save information
regarding the dropped header it could use this temporary
transient storage. The tools to maintain this storage are
traditional (for DPDK rte_flow API):
- RTE_FLOW_ACTION_TYPE_SET_TAG - to set value
- RTE_FLOW_ACTION_TYPE_SET_ITEM - to match on
There are primary properties of the proposed storage:
- the storage is presented as an array of 32-bit opaque values
- the size of array (or even bitmap of available indices) is
vendor specific and is subject to run-time trial
- it is transient, it means it exists only inside flow engine,
no gateways for interacting with datapath, applications have
way neither to specify these data on transmitting nor to get
these data on receiving
This patchset implements the abovementioned extensive metadata
feature in the mlx5 PMD.
The patchset must be applied after hashed list patch:
[1] http://patches.dpdk.org/patch/62539/
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
v2: - fix: metadata endianess
- fix: infinite loop in header modify update routine
- fix: reg_c_3 is reserved for split shared tag
- fix: vport mask and value endianess
- hash list implementation removed
- rebased
v1: http://patches.dpdk.org/cover/62419/
Viacheslav Ovsiienko (19):
net/mlx5: convert internal tag endianness
net/mlx5: update modify header action translator
net/mlx5: add metadata register copy
net/mlx5: refactor flow structure
net/mlx5: update flow functions
net/mlx5: update meta register matcher set
net/mlx5: rename structure and function
net/mlx5: check metadata registers availability
net/mlx5: add devarg for extensive metadata support
net/mlx5: adjust shared register according to mask
net/mlx5: check the maximal modify actions number
net/mlx5: update metadata register id query
net/mlx5: add flow tag support
net/mlx5: extend flow mark support
net/mlx5: extend flow meta data support
net/mlx5: add meta data support to Rx datapath
net/mlx5: introduce flow splitters chain
net/mlx5: split Rx flows to provide metadata copy
net/mlx5: add metadata register copy table
doc/guides/nics/mlx5.rst | 49 +
drivers/net/mlx5/mlx5.c | 152 ++-
drivers/net/mlx5/mlx5.h | 19 +-
drivers/net/mlx5/mlx5_defs.h | 8 +
drivers/net/mlx5/mlx5_ethdev.c | 8 +-
drivers/net/mlx5/mlx5_flow.c | 1201 ++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 108 ++-
drivers/net/mlx5/mlx5_flow_dv.c | 1566 ++++++++++++++++++++++++------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 +-
drivers/net/mlx5/mlx5_prm.h | 45 +-
drivers/net/mlx5/mlx5_rxtx.c | 5 +
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +-
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +-
14 files changed, 2868 insertions(+), 423 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 01/19] net/mlx5: convert internal tag endianness
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
` (17 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
Public API RTE_FLOW_ACTION_TYPE_TAG and RTE_FLOW_ITEM_TYPE_TAG
present data in host-endian format, as all metadata related
entities. The internal mlx5 tag related action and item should
use the same endianness for consistency.
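A minimal sketch (illustrative only, not from this patch) of the convention
the change aligns with: the public TAG action carries host-endian data, so
the PMD-internal counterpart keeps the value exactly as the application
wrote it, without byte swapping:

    #include <rte_flow.h>

    struct rte_flow_action_set_tag conf = {
        .data = 0x1234,      /* host-endian, as seen by the application */
        .mask = UINT32_MAX,
        .index = 0,
    };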
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 6 +++---
drivers/net/mlx5/mlx5_flow.h | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4b08f4..5408797 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2707,7 +2707,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
actions_rx++;
set_tag = (void *)actions_rx;
set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
- set_tag->data = rte_cpu_to_be_32(*flow_id);
+ set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action));
@@ -2715,8 +2715,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item = pattern_tx;
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
- tag_item->data = rte_cpu_to_be_32(*flow_id);
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, &error);
+ tag_item->data = *flow_id;
+ tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7559810..8cc6c47 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -56,13 +56,13 @@ enum mlx5_rte_flow_action_type {
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Matches on source queue. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 02/19] net/mlx5: update modify header action translator
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
` (16 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
When composing the device command for a modify header action, the
provided mask should be taken into account more accurately: the length
and offset in the action should be set at precise bit-wise boundaries.
For future use, the metadata register copy action is also added.
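A worked example of the bit-boundary computation (sketch only, the variable
names are illustrative): for a 32-bit field with mask 0x00ffff00 the device
command must cover exactly bits [8..23]:

    #include <rte_common.h>

    uint32_t mask = 0x00ffff00;
    unsigned int off_b = rte_bsf32(mask);                    /* 8  */
    unsigned int size_b = 32 - off_b - __builtin_clz(mask);  /* 16 */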
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 150 ++++++++++++++++++++++++++++++----------
drivers/net/mlx5/mlx5_prm.h | 18 +++--
2 files changed, 128 insertions(+), 40 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 42c265f..6a3850a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -240,12 +240,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Fetch 1, 2, 3 or 4 byte field from the byte array
+ * and return as unsigned integer in host-endian format.
+ *
+ * @param[in] data
+ * Pointer to data array.
+ * @param[in] size
+ * Size of field to extract.
+ *
+ * @return
+ * converted field in host endian format.
+ */
+static inline uint32_t
+flow_dv_fetch_field(const uint8_t *data, uint32_t size)
+{
+ uint32_t ret;
+
+ switch (size) {
+ case 1:
+ ret = *data;
+ break;
+ case 2:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ break;
+ case 3:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ ret = (ret << 8) | *(data + sizeof(uint16_t));
+ break;
+ case 4:
+ ret = rte_be_to_cpu_32(*(const unaligned_uint32_t *)data);
+ break;
+ default:
+ assert(false);
+ ret = 0;
+ break;
+ }
+ return ret;
+}
+
+/**
* Convert modify-header action to DV specification.
*
+ * Data length of each action is determined by provided field description
+ * and the item mask. Data bit offset and width of each action is determined
+ * by provided item mask.
+ *
* @param[in] item
* Pointer to item specification.
* @param[in] field
* Pointer to field modification information.
+ * For MLX5_MODIFICATION_TYPE_SET specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_ADD specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_COPY specifies source field.
+ * @param[in] dcopy
+ * Destination field info for MLX5_MODIFICATION_TYPE_COPY in @type.
+ * Negative offset value sets the same offset as source offset.
+ * size field is ignored, value is taken from source field.
* @param[in,out] resource
* Pointer to the modify-header resource.
* @param[in] type
@@ -259,38 +309,68 @@ struct field_modify_info modify_tcp[] = {
static int
flow_dv_convert_modify_action(struct rte_flow_item *item,
struct field_modify_info *field,
+ struct field_modify_info *dcopy,
struct mlx5_flow_dv_modify_hdr_resource *resource,
- uint32_t type,
- struct rte_flow_error *error)
+ uint32_t type, struct rte_flow_error *error)
{
uint32_t i = resource->actions_num;
struct mlx5_modification_cmd *actions = resource->actions;
- const uint8_t *spec = item->spec;
- const uint8_t *mask = item->mask;
- uint32_t set;
-
- while (field->size) {
- set = 0;
- /* Generate modify command for each mask segment. */
- memcpy(&set, &mask[field->offset], field->size);
- if (set) {
- if (i >= MLX5_MODIFY_NUM)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "too many items to modify");
- actions[i].action_type = type;
- actions[i].field = field->id;
- actions[i].length = field->size ==
- 4 ? 0 : field->size * 8;
- rte_memcpy(&actions[i].data[4 - field->size],
- &spec[field->offset], field->size);
- actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- ++i;
+
+ /*
+ * The item and mask are provided in big-endian format.
+ * The fields should be presented in big-endian format as well.
+ * The mask must always be present, it defines the actual field width.
+ */
+ assert(item->mask);
+ assert(field->size);
+ do {
+ unsigned int size_b;
+ unsigned int off_b;
+ uint32_t mask;
+ uint32_t data;
+
+ if (i >= MLX5_MODIFY_NUM)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many items to modify");
+ /* Fetch variable byte size mask from the array. */
+ mask = flow_dv_fetch_field((const uint8_t *)item->mask +
+ field->offset, field->size);
+ if (!mask) {
+ ++field;
+ continue;
}
- if (resource->actions_num != i)
- resource->actions_num = i;
- field++;
- }
+ /* Deduce actual data width in bits from mask value. */
+ off_b = rte_bsf32(mask);
+ size_b = sizeof(uint32_t) * CHAR_BIT -
+ off_b - __builtin_clz(mask);
+ assert(size_b);
+ size_b = size_b == sizeof(uint32_t) * CHAR_BIT ? 0 : size_b;
+ actions[i].action_type = type;
+ actions[i].field = field->id;
+ actions[i].offset = off_b;
+ actions[i].length = size_b;
+ /* Convert entire record to expected big-endian format. */
+ actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
+ if (type == MLX5_MODIFICATION_TYPE_COPY) {
+ assert(dcopy);
+ actions[i].dst_field = dcopy->id;
+ actions[i].dst_offset =
+ (int)dcopy->offset < 0 ? off_b : dcopy->offset;
+ /* Convert entire record to big-endian format. */
+ actions[i].data1 = rte_cpu_to_be_32(actions[i].data1);
+ } else {
+ assert(item->spec);
+ data = flow_dv_fetch_field((const uint8_t *)item->spec +
+ field->offset, field->size);
+ /* Shift out the trailing masked bits from data. */
+ data = (data & mask) >> off_b;
+ actions[i].data1 = rte_cpu_to_be_32(data);
+ }
+ ++i;
+ ++field;
+ } while (field->size);
+ resource->actions_num = i;
if (!resource->actions_num)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -334,7 +414,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv4;
item.mask = &ipv4_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv4, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv4, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -380,7 +460,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv6;
item.mask = &ipv6_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv6, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv6, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -426,7 +506,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = ð
item.mask = ð_mask;
- return flow_dv_convert_modify_action(&item, modify_eth, resource,
+ return flow_dv_convert_modify_action(&item, modify_eth, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -540,7 +620,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &tcp_mask;
field = modify_tcp;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -600,7 +680,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -657,7 +737,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -702,7 +782,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -747,7 +827,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 96b9166..b9e53f5 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -383,11 +383,12 @@ struct mlx5_cqe {
/* CQE format value. */
#define MLX5_COMPRESSED 0x3
-/* Write a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_SET 1
-
-/* Add a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_ADD 2
+/* Action type of header modification. */
+enum {
+ MLX5_MODIFICATION_TYPE_SET = 0x1,
+ MLX5_MODIFICATION_TYPE_ADD = 0x2,
+ MLX5_MODIFICATION_TYPE_COPY = 0x3,
+};
/* The field of packet to be modified. */
enum mlx5_modification_field {
@@ -470,6 +471,13 @@ struct mlx5_modification_cmd {
union {
uint32_t data1;
uint8_t data[4];
+ struct {
+ unsigned int rsvd2:8;
+ unsigned int dst_offset:5;
+ unsigned int rsvd3:3;
+ unsigned int dst_field:12;
+ unsigned int rsvd4:4;
+ };
};
};
--
1.8.3.1
* [dpdk-dev] [PATCH v2 03/19] net/mlx5: add metadata register copy
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
` (15 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add a flow metadata register copy action, which is supported through the modify
header command. As it is an internal action not exposed to users, its action
type (MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG) is a negative value. This action can
be used when creating PMD internal subflows.
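As a rough illustration (a sketch only; REG_C_1/REG_C_2 are placeholders for
whatever registers the PMD actually allocates), an internal subflow could carry
the new action like this:

	struct mlx5_flow_action_copy_mreg copy = {
		.dst = REG_C_2, /* destination metadata register (placeholder) */
		.src = REG_C_1, /* source metadata register (placeholder) */
	};
	struct rte_flow_action actions[] = {
		{
			.type = (enum rte_flow_action_type)
				MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
			.conf = &copy,
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END, },
	};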
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 13 +++++++----
drivers/net/mlx5/mlx5_flow_dv.c | 50 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8cc6c47..170192d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -47,24 +47,30 @@ enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
};
-/* Private rte flow actions. */
+/* Private (internal) rte flow actions. */
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
+struct mlx5_flow_action_copy_mreg {
+ enum modify_reg dst;
+ enum modify_reg src;
+};
+
/* Matches on source queue. */
struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
@@ -227,7 +233,6 @@ struct mlx5_rte_flow_item_tx_queue {
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
-
#ifndef IPPROTO_MPLS
#define IPPROTO_MPLS 137
#endif
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6a3850a..baa34a2 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -863,7 +863,7 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
- const struct mlx5_rte_flow_action_set_tag *conf = (action->conf);
+ const struct mlx5_rte_flow_action_set_tag *conf = action->conf;
struct mlx5_modification_cmd *actions = resource->actions;
uint32_t i = resource->actions_num;
@@ -885,6 +885,47 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert internal COPY_REG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] res
+ * Pointer to the modify-header resource.
+ * @param[in] action
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+ struct mlx5_flow_dv_modify_hdr_resource *res,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_action_copy_mreg *conf = action->conf;
+ uint32_t mask = RTE_BE32(UINT32_MAX);
+ struct rte_flow_item item = {
+ .spec = NULL,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_src[] = {
+ {4, 0, reg_to_field[conf->src]},
+ {0, 0, 0},
+ };
+ struct field_modify_info reg_dst = {
+ .offset = (uint32_t)-1, /* Same as src. */
+ .id = reg_to_field[conf->dst],
+ };
+ return flow_dv_convert_modify_action(&item,
+ reg_src, ®_dst, res,
+ MLX5_MODIFICATION_TYPE_COPY,
+ error);
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -3951,6 +3992,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -5947,6 +5989,12 @@ struct field_modify_info modify_tcp[] = {
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
+ if (flow_dv_convert_action_copy_mreg(dev, &res,
+ actions, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
--
1.8.3.1
* [dpdk-dev] [PATCH v2 04/19] net/mlx5: refactor flow structure
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (2 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
` (14 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Some rte_flow fields which are local to subflows have been moved to the
mlx5_flow structure. RSS attributes are grouped in the mlx5_flow_rss structure,
and tag_resource is moved to the mlx5_flow_dv structure.
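As a small illustration of the new layout (based on the hunks below), per-queue
handling now reaches the queue array through the rss sub-structure:

	for (i = 0; i != flow->rss.queue_num; ++i) {
		int idx = (*flow->rss.queue)[i]; /* was (*flow->queue)[i] */
		/* per-queue handling ... */
	}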
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 18 +++++---
drivers/net/mlx5/mlx5_flow.h | 25 ++++++-----
drivers/net/mlx5/mlx5_flow_dv.c | 89 ++++++++++++++++++++------------------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 ++++++++++++-----------
4 files changed, 105 insertions(+), 82 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5408797..d1661f2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -612,7 +612,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
unsigned int i;
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -676,7 +676,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
assert(dev->data->dev_started);
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -2815,13 +2815,20 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
goto error_before_flow;
}
flow->drv_type = flow_get_drv_type(dev, attr);
- flow->ingress = attr->ingress;
- flow->transfer = attr->transfer;
if (hairpin_id != 0)
flow->hairpin_flow_id = hairpin_id;
assert(flow->drv_type > MLX5_FLOW_TYPE_MIN &&
flow->drv_type < MLX5_FLOW_TYPE_MAX);
- flow->queue = (void *)(flow + 1);
+ flow->rss.queue = (void *)(flow + 1);
+ if (rss) {
+ /*
+ * The following information is required by
+ * mlx5_flow_hashfields_adjust() in advance.
+ */
+ flow->rss.level = rss->level;
+ /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
+ flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
+ }
LIST_INIT(&flow->dev_flows);
if (rss && rss->types) {
unsigned int graph_root;
@@ -2861,6 +2868,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (!dev_flow)
goto error;
dev_flow->flow = flow;
+ dev_flow->external = 0;
LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
ret = flow_drv_translate(dev, dev_flow, &attr_tx,
items_tx.items,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 170192d..b9a9507 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -417,7 +417,6 @@ struct mlx5_flow_dv_push_vlan_action_resource {
/* DV flows structure. */
struct mlx5_flow_dv {
- uint64_t hash_fields; /**< Fields that participate in the hash. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queues. */
/* Flow DV api: */
struct mlx5_flow_dv_matcher *matcher; /**< Cache to matcher. */
@@ -436,6 +435,8 @@ struct mlx5_flow_dv {
/**< Structure for VF VLAN workaround. */
struct mlx5_flow_dv_push_vlan_action_resource *push_vlan_res;
/**< Pointer to push VLAN action resource in cache. */
+ struct mlx5_flow_dv_tag_resource *tag_resource;
+ /**< pointer to the tag action. */
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
void *actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS];
/**< Action list. */
@@ -460,11 +461,18 @@ struct mlx5_flow_verbs {
};
struct ibv_flow *flow; /**< Verbs flow pointer. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queue object. */
- uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
struct mlx5_vf_vlan vf_vlan;
/**< Structure for VF VLAN workaround. */
};
+struct mlx5_flow_rss {
+ uint32_t level;
+ uint32_t queue_num; /**< Number of entries in @p queue. */
+ uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
+ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+};
+
/** Device flow structure. */
struct mlx5_flow {
LIST_ENTRY(mlx5_flow) next;
@@ -473,6 +481,10 @@ struct mlx5_flow {
/**< Bit-fields of present layers, see MLX5_FLOW_LAYER_*. */
uint64_t actions;
/**< Bit-fields of detected actions, see MLX5_FLOW_ACTION_*. */
+ uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
+ uint8_t ingress; /**< 1 if the flow is ingress. */
+ uint32_t group; /**< The group index. */
+ uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
union {
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
struct mlx5_flow_dv dv;
@@ -486,18 +498,11 @@ struct mlx5_flow {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
+ struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
- struct mlx5_flow_dv_tag_resource *tag_resource;
- /**< pointer to the tag action. */
- struct rte_flow_action_rss rss;/**< RSS context. */
- uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
- uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
- uint8_t ingress; /**< 1 if the flow is ingress. */
- uint32_t group; /**< The group index. */
- uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index baa34a2..b7e8e0a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1585,10 +1585,9 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
struct mlx5_flow_dv_encap_decap_resource *cache_resource;
- struct rte_flow *flow = dev_flow->flow;
struct mlx5dv_dr_domain *domain;
- resource->flags = flow->group ? 0 : 1;
+ resource->flags = dev_flow->group ? 0 : 1;
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
domain = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX)
@@ -2747,7 +2746,7 @@ struct field_modify_info modify_tcp[] = {
else
ns = sh->rx_domain;
resource->flags =
- dev_flow->flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
+ dev_flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
/* Lookup a matching resource from cache. */
LIST_FOREACH(cache_resource, &sh->modify_cmds, next) {
if (resource->ft_type == cache_resource->ft_type &&
@@ -4068,18 +4067,20 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action actions[] __rte_unused,
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow);
+ struct mlx5_flow *dev_flow;
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
- return flow;
+ dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
#ifndef NDEBUG
@@ -5460,7 +5461,7 @@ struct field_modify_info modify_tcp[] = {
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
rte_atomic32_inc(&cache_resource->refcnt);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
return 0;
}
}
@@ -5482,7 +5483,7 @@ struct field_modify_info modify_tcp[] = {
rte_atomic32_init(&cache_resource->refcnt);
rte_atomic32_inc(&cache_resource->refcnt);
LIST_INSERT_HEAD(&sh->tags, cache_resource, next);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
DRV_LOG(DEBUG, "new tag resource %p: refcnt %d++",
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
@@ -5662,7 +5663,7 @@ struct field_modify_info modify_tcp[] = {
&table, error);
if (ret)
return ret;
- flow->group = table;
+ dev_flow->group = table;
if (attr->transfer)
res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
@@ -5699,47 +5700,50 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_FLAG:
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ assert(flow->rss.queue);
queue = actions->conf;
flow->rss.queue_num = 1;
- (*flow->queue)[0] = queue->index;
+ (*flow->rss.queue)[0] = queue->index;
action_flags |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
+ assert(flow->rss.queue);
rss = actions->conf;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance
+ * when expanding items for RSS.
+ */
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
@@ -5750,7 +5754,7 @@ struct field_modify_info modify_tcp[] = {
flow->counter = flow_dv_counter_alloc(dev,
count->shared,
count->id,
- flow->group);
+ dev_flow->group);
if (flow->counter == NULL)
goto cnt_err;
dev_flow->dv.actions[actions_n++] =
@@ -6048,9 +6052,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv4(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -6075,9 +6080,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv6(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -6102,7 +6108,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_tcp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
IBV_RX_HASH_SRC_PORT_TCP |
@@ -6114,7 +6120,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_udp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
IBV_RX_HASH_SRC_PORT_UDP |
@@ -6210,7 +6216,7 @@ struct field_modify_info modify_tcp[] = {
matcher.priority = mlx5_flow_adjust_priority(dev, priority,
matcher.priority);
matcher.egress = attr->egress;
- matcher.group = flow->group;
+ matcher.group = dev_flow->group;
matcher.transfer = attr->transfer;
if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
return -rte_errno;
@@ -6244,7 +6250,7 @@ struct field_modify_info modify_tcp[] = {
dv = &dev_flow->dv;
n = dv->actions_n;
if (dev_flow->actions & MLX5_FLOW_ACTION_DROP) {
- if (flow->transfer) {
+ if (dev_flow->transfer) {
dv->actions[n++] = priv->sh->esw_drop_action;
} else {
dv->hrxq = mlx5_hrxq_drop_new(dev);
@@ -6262,15 +6268,18 @@ struct field_modify_info modify_tcp[] = {
(MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS)) {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq) {
hrxq = mlx5_hrxq_new
- (dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields, (*flow->queue),
+ (dev, flow->rss.key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
@@ -6580,10 +6589,6 @@ struct field_modify_info modify_tcp[] = {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
}
- if (flow->tag_resource) {
- flow_dv_tag_release(dev, flow->tag_resource);
- flow->tag_resource = NULL;
- }
while (!LIST_EMPTY(&flow->dev_flows)) {
dev_flow = LIST_FIRST(&flow->dev_flows);
LIST_REMOVE(dev_flow, next);
@@ -6599,6 +6604,8 @@ struct field_modify_info modify_tcp[] = {
flow_dv_port_id_action_resource_release(dev_flow);
if (dev_flow->dv.push_vlan_res)
flow_dv_push_vlan_action_resource_release(dev_flow);
+ if (dev_flow->dv.tag_resource)
+ flow_dv_tag_release(dev, dev_flow->dv.tag_resource);
rte_free(dev_flow);
}
}
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fd27f6c..3ab73c2 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -864,8 +864,8 @@
const struct rte_flow_action_queue *queue = action->conf;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- (*flow->queue)[0] = queue->index;
+ if (flow->rss.queue)
+ (*flow->rss.queue)[0] = queue->index;
flow->rss.queue_num = 1;
}
@@ -889,16 +889,17 @@
const uint8_t *rss_key;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance when expanding
+ * items for RSS.
+ */
}
/**
@@ -1365,22 +1366,23 @@
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
+ struct mlx5_flow *dev_flow;
size += flow_verbs_get_actions_size(actions);
size += flow_verbs_get_items_size(items);
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->verbs.attr = (void *)(flow + 1);
- flow->verbs.specs =
- (uint8_t *)(flow + 1) + sizeof(struct ibv_flow_attr);
- return flow;
+ dev_flow->verbs.attr = (void *)(dev_flow + 1);
+ dev_flow->verbs.specs = (void *)(dev_flow->verbs.attr + 1);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
/**
@@ -1486,7 +1488,7 @@
flow_verbs_translate_item_ipv4(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -1498,7 +1500,7 @@
flow_verbs_translate_item_ipv6(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -1510,7 +1512,7 @@
flow_verbs_translate_item_tcp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
@@ -1522,7 +1524,7 @@
flow_verbs_translate_item_udp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
@@ -1667,16 +1669,17 @@
} else {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq)
- hrxq = mlx5_hrxq_new(dev, flow->key,
+ hrxq = mlx5_hrxq_new(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
--
1.8.3.1
* [dpdk-dev] [PATCH v2 05/19] net/mlx5: update flow functions
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (3 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
` (13 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Update the flow creation/destroy functions for future reuse: list operations
can now be skipped inside the functions and performed separately, outside of
flow creation.
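For example, a PMD-internal flow can now be created and destroyed without
touching any flow list by passing a NULL list pointer (a sketch mirroring the
later reg_c discovery code):

	flow = flow_list_create(dev, NULL, &attr, items, actions, false, &error);
	if (flow)
		flow_list_destroy(dev, NULL, flow);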
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d1661f2..6e6c845 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2736,7 +2736,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter is NULL,
+ * no list insertion is done, the flow is just created and
+ * it is the caller's responsibility to track the
+ * created flow.
* @param[in] attr
* Flow rule attributes.
* @param[in] items
@@ -2881,7 +2884,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
- TAILQ_INSERT_TAIL(list, flow, next);
+ if (list)
+ TAILQ_INSERT_TAIL(list, flow, next);
flow_rxq_flags_set(dev, flow);
return flow;
error_before_flow:
@@ -2975,7 +2979,8 @@ struct rte_flow *
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter is NULL,
+ * the flow is not removed from the list.
* @param[in] flow
* Flow to destroy.
*/
@@ -2995,7 +3000,8 @@ struct rte_flow *
mlx5_flow_id_release(priv->sh->flow_id_pool,
flow->hairpin_flow_id);
flow_drv_destroy(dev, flow);
- TAILQ_REMOVE(list, flow, next);
+ if (list)
+ TAILQ_REMOVE(list, flow, next);
rte_free(flow->fdir);
rte_free(flow);
}
--
1.8.3.1
* [dpdk-dev] [PATCH v2 06/19] net/mlx5: update meta register matcher set
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (4 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
` (12 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Introduce a dedicated routine for setting up metadata register fields in the
matcher, and update the code to use this unified routine.
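With the unified routine, setting up a register match reduces to a single call,
e.g. for the vport register (as the hunk below does):

	flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask);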
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 171 +++++++++++++++++++---------------------
1 file changed, 82 insertions(+), 89 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b7e8e0a..170726f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4905,6 +4905,78 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add metadata register item to matcher
+ *
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] reg_type
+ * Type of device metadata register
+ * @param[in] value
+ * Register value
+ * @param[in] mask
+ * Register mask
+ */
+static void
+flow_dv_match_meta_reg(void *matcher, void *key,
+ enum modify_reg reg_type,
+ uint32_t data, uint32_t mask)
+{
+ void *misc2_m =
+ MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
+ void *misc2_v =
+ MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
+
+ data &= mask;
+ switch (reg_type) {
+ case REG_A:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data);
+ break;
+ case REG_B:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data);
+ break;
+ case REG_C_0:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, data);
+ break;
+ case REG_C_1:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data);
+ break;
+ case REG_C_2:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data);
+ break;
+ case REG_C_3:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data);
+ break;
+ case REG_C_4:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data);
+ break;
+ case REG_C_5:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data);
+ break;
+ case REG_C_6:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data);
+ break;
+ case REG_C_7:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data);
+ break;
+ default:
+ assert(false);
+ break;
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -4922,21 +4994,15 @@ struct field_modify_info modify_tcp[] = {
{
const struct rte_flow_item_meta *meta_m;
const struct rte_flow_item_meta *meta_v;
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
meta_m = (const void *)item->mask;
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v) {
- MLX5_SET(fte_match_set_misc2, misc2_m,
- metadata_reg_a, meta_m->data);
- MLX5_SET(fte_match_set_misc2, misc2_v,
- metadata_reg_a, meta_v->data & meta_m->data);
- }
+ if (meta_v)
+ flow_dv_match_meta_reg(matcher, key, REG_A,
+ rte_cpu_to_be_32(meta_v->data),
+ rte_cpu_to_be_32(meta_m->data));
}
/**
@@ -4953,13 +5019,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_meta_vport(void *matcher, void *key,
uint32_t value, uint32_t mask)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
-
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, value);
+ flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask);
}
/**
@@ -4973,81 +5033,14 @@ struct field_modify_info modify_tcp[] = {
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_tag(void *matcher, void *key,
- const struct rte_flow_item *item)
+flow_dv_translate_mlx5_item_tag(void *matcher, void *key,
+ const struct rte_flow_item *item)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
enum modify_reg reg = tag_v->id;
- rte_be32_t value = tag_v->data;
- rte_be32_t mask = tag_m->data;
- switch (reg) {
- case REG_A:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a,
- rte_be_to_cpu_32(value));
- break;
- case REG_B:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_0:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_1:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_2:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_3:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_4:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_5:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_6:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_7:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7,
- rte_be_to_cpu_32(value));
- break;
- }
+ flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
/**
@@ -6179,8 +6172,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
- flow_dv_translate_item_tag(match_mask, match_value,
- items);
+ flow_dv_translate_mlx5_item_tag(match_mask,
+ match_value, items);
last_item = MLX5_FLOW_ITEM_TAG;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
--
1.8.3.1
* [dpdk-dev] [PATCH v2 07/19] net/mlx5: rename structure and function
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (5 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
` (11 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The following renames are applied:
- in the DV flow engine overall: flow_d_* -> flow_dv_*
- in flow_dv_translate(): res -> mhdr_res
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 151 ++++++++++++++++++++--------------------
1 file changed, 76 insertions(+), 75 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 170726f..9b2eba5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -183,7 +183,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to the rte_eth_dev structure.
*/
static void
-flow_d_shared_lock(struct rte_eth_dev *dev)
+flow_dv_shared_lock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -198,7 +198,7 @@ struct field_modify_info modify_tcp[] = {
}
static void
-flow_d_shared_unlock(struct rte_eth_dev *dev)
+flow_dv_shared_unlock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -5599,7 +5599,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Fill the flow with DV spec.
+ * Fill the flow with DV spec, lock free
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to rte_eth_dev structure.
@@ -5618,12 +5619,12 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+__flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow *flow = dev_flow->flow;
@@ -5638,7 +5639,7 @@ struct field_modify_info modify_tcp[] = {
};
int actions_n = 0;
bool actions_end = false;
- struct mlx5_flow_dv_modify_hdr_resource res = {
+ struct mlx5_flow_dv_modify_hdr_resource mhdr_res = {
.ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX
};
@@ -5658,7 +5659,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
dev_flow->group = table;
if (attr->transfer)
- res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
+ mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; !actions_end ; actions++) {
@@ -5807,7 +5808,7 @@ struct field_modify_info modify_tcp[] = {
mlx5_update_vlan_vid_pcp(actions, &vlan);
/* If no VLAN push - this is a modify header action */
if (flow_dv_convert_action_modify_vlan_vid
- (&res, actions, error))
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
break;
@@ -5906,8 +5907,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
- if (flow_dv_convert_action_modify_mac(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_mac
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ?
@@ -5916,8 +5917,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
- if (flow_dv_convert_action_modify_ipv4(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv4
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ?
@@ -5926,8 +5927,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
- if (flow_dv_convert_action_modify_ipv6(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv6
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ?
@@ -5936,9 +5937,9 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
- if (flow_dv_convert_action_modify_tp(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_tp
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_TP_SRC ?
@@ -5946,23 +5947,22 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_SET_TP_DST;
break;
case RTE_FLOW_ACTION_TYPE_DEC_TTL:
- if (flow_dv_convert_action_modify_dec_ttl(&res, items,
- &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_dec_ttl
+ (&mhdr_res, items, &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_DEC_TTL;
break;
case RTE_FLOW_ACTION_TYPE_SET_TTL:
- if (flow_dv_convert_action_modify_ttl(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_ttl
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TTL;
break;
case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ:
- if (flow_dv_convert_action_modify_tcp_seq(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_seq
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ?
@@ -5972,8 +5972,8 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK:
- if (flow_dv_convert_action_modify_tcp_ack(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_ack
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ?
@@ -5981,14 +5981,14 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
- if (flow_dv_convert_action_set_reg(&res, actions,
- error))
+ if (flow_dv_convert_action_set_reg
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
- if (flow_dv_convert_action_copy_mreg(dev, &res,
- actions, error))
+ if (flow_dv_convert_action_copy_mreg
+ (dev, &mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
@@ -5997,9 +5997,7 @@ struct field_modify_info modify_tcp[] = {
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
- (dev, &res,
- dev_flow,
- error))
+ (dev, &mhdr_res, dev_flow, error))
return -rte_errno;
dev_flow->dv.actions[modify_action_position] =
dev_flow->dv.modify_hdr->verbs_action;
@@ -6217,7 +6215,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Apply the flow to the NIC.
+ * Apply the flow to the NIC, lock free,
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6230,8 +6229,8 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
- struct rte_flow_error *error)
+__flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6529,6 +6528,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC but keeps it in memory.
+ * Lock free (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to Ethernet device.
@@ -6536,7 +6536,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6564,6 +6564,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC and the memory.
+ * Lock free (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6571,13 +6572,13 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow *dev_flow;
if (!flow)
return;
- flow_dv_remove(dev, flow);
+ __flow_dv_remove(dev, flow);
if (flow->counter) {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
@@ -6688,69 +6689,69 @@ struct field_modify_info modify_tcp[] = {
}
/*
- * Mutex-protected thunk to flow_dv_translate().
+ * Mutex-protected thunk to lock-free __flow_dv_translate().
*/
static int
-flow_d_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_translate(dev, dev_flow, attr, items, actions, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_translate(dev, dev_flow, attr, items, actions, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_apply().
+ * Mutex-protected thunk to lock-free __flow_dv_apply().
*/
static int
-flow_d_apply(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
+flow_dv_apply(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_apply(dev, flow, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_apply(dev, flow, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_remove().
+ * Mutex-protected thunk to lock-free __flow_dv_remove().
*/
static void
-flow_d_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_remove(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_remove(dev, flow);
+ flow_dv_shared_unlock(dev);
}
/*
- * Mutex-protected thunk to flow_dv_destroy().
+ * Mutex-protected thunk to lock-free __flow_dv_destroy().
*/
static void
-flow_d_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_destroy(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_destroy(dev, flow);
+ flow_dv_shared_unlock(dev);
}
const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.validate = flow_dv_validate,
.prepare = flow_dv_prepare,
- .translate = flow_d_translate,
- .apply = flow_d_apply,
- .remove = flow_d_remove,
- .destroy = flow_d_destroy,
+ .translate = flow_dv_translate,
+ .apply = flow_dv_apply,
+ .remove = flow_dv_remove,
+ .destroy = flow_dv_destroy,
.query = flow_dv_query,
};
--
1.8.3.1
* [dpdk-dev] [PATCH v2 08/19] net/mlx5: check metadata registers availability
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (6 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
` (10 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The metadata registers reg_c provide support for the TAG and
SET_TAG features. Although 8 such registers are available on
current mlx5 devices, some of them can be reserved.
The availability should be queried by the iterative trial-and-error
procedure implemented in the mlx5_flow_discover_mreg_c() routine.
If reg_c registers are available, extensive metadata support can be
assumed as well, e.g. the metadata register copy action, support for
16 modify header actions (instead of the default 8), and preservation
of the registers across different domains (FDB and NIC).
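After discovery, the availability check is straightforward (a sketch;
flow_mreg_c[] is filled by mlx5_flow_discover_mreg_c() and unavailable
slots hold REG_NONE):

	struct mlx5_priv *priv = dev->data->dev_private;

	if (priv->config.flow_mreg_c[2] != REG_NONE) {
		/* Extensive metadata is usable: SET_TAG, COPY_MREG, etc. */
	}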
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 11 +++++
drivers/net/mlx5/mlx5.h | 11 ++++-
drivers/net/mlx5/mlx5_ethdev.c | 8 +++-
drivers/net/mlx5/mlx5_flow.c | 98 +++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow.h | 13 ------
drivers/net/mlx5/mlx5_flow_dv.c | 9 ++--
drivers/net/mlx5/mlx5_prm.h | 18 ++++++++
7 files changed, 148 insertions(+), 20 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 72c30bf..1b86b7b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2341,6 +2341,17 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
+ if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
+ DRV_LOG(DEBUG,
+ "port %u extensive metadata register is not supported",
+ eth_dev->data->port_id);
+ }
return eth_dev;
error:
if (priv) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..6b82c6d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -37,6 +37,7 @@
#include "mlx5_autoconf.h"
#include "mlx5_defs.h"
#include "mlx5_glue.h"
+#include "mlx5_prm.h"
enum {
PCI_VENDOR_ID_MELLANOX = 0x15b3,
@@ -252,6 +253,8 @@ struct mlx5_dev_config {
} mprq; /* Configurations for Multi-Packet RQ. */
int mps; /* Multi-packet send supported mode. */
unsigned int flow_prio; /* Number of flow priorities. */
+ enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+ /* Availability of mreg_c's. */
unsigned int tso_max_payload_sz; /* Maximum TCP payload for TSO. */
unsigned int ind_table_max_size; /* Maximum indirection table size. */
unsigned int max_dump_files_num; /* Maximum dump files per queue. */
@@ -561,6 +564,10 @@ struct mlx5_flow_tbl_resource {
#define MLX5_MAX_TABLES UINT16_MAX
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
+/* Reserve the last two tables for metadata register copy. */
+#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
+ (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -786,7 +793,7 @@ int mlx5_dev_to_pci_addr(const char *dev_path,
int mlx5_is_removed(struct rte_eth_dev *dev);
eth_tx_burst_t mlx5_select_tx_function(struct rte_eth_dev *dev);
eth_rx_burst_t mlx5_select_rx_function(struct rte_eth_dev *dev);
-struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port);
+struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port, bool valid);
struct mlx5_priv *mlx5_dev_to_eswitch_info(struct rte_eth_dev *dev);
int mlx5_sysfs_switch_info(unsigned int ifindex,
struct mlx5_switch_info *info);
@@ -866,6 +873,8 @@ int mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
/* mlx5_flow.c */
+int mlx5_flow_discover_mreg_c(struct rte_eth_dev *eth_dev);
+bool mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev);
int mlx5_flow_discover_priorities(struct rte_eth_dev *dev);
void mlx5_flow_print(struct rte_flow *flow);
int mlx5_flow_validate(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..2b7c867 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1793,6 +1793,10 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
*
* @param[in] port
* Device port id.
+ * @param[in] valid
+ * Flag indicating the device port id is known to be valid, so the
+ * check can be skipped. This is useful when trials are performed
+ * from probing and the device is not flagged as valid yet (still
+ * in the attaching process).
* @param[out] es_domain_id
* E-Switch domain id.
* @param[out] es_port_id
@@ -1803,7 +1807,7 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
* on success, NULL otherwise and rte_errno is set.
*/
struct mlx5_priv *
-mlx5_port_to_eswitch_info(uint16_t port)
+mlx5_port_to_eswitch_info(uint16_t port, bool valid)
{
struct rte_eth_dev *dev;
struct mlx5_priv *priv;
@@ -1812,7 +1816,7 @@ struct mlx5_priv *
rte_errno = EINVAL;
return NULL;
}
- if (!rte_eth_dev_is_valid_port(port)) {
+ if (!valid && !rte_eth_dev_is_valid_port(port)) {
rte_errno = ENODEV;
return NULL;
}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6e6c845..f32ea8d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -368,6 +368,33 @@ static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
NULL, "invalid feature name");
}
+
+/**
+ * Check extensive flow metadata register support.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * True if device supports extensive flow metadata register, otherwise false.
+ */
+bool
+mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
+ /*
+ * Having an available reg_c can be regarded as support for the
+ * extensive flow metadata registers, which means:
+ * - metadata register copy action by modify header,
+ * - 16 modify header actions are supported,
+ * - reg_c's are preserved across different domains (FDB and NIC) on
+ *   packet loopback by flow lookup miss.
+ */
+ return config->flow_mreg_c[2] != REG_NONE;
+}
+
/**
* Discover the maximum number of priority available.
*
@@ -4033,3 +4060,74 @@ struct rte_flow *
}
return 0;
}
+
+/**
+ * Discover availability of metadata reg_c's.
+ *
+ * Iteratively use test flows to check availability.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ enum modify_reg idx;
+ int n = 0;
+
+ /* reg_c[0] and reg_c[1] are reserved. */
+ config->flow_mreg_c[n++] = REG_C_0;
+ config->flow_mreg_c[n++] = REG_C_1;
+ /* Discover availability of other reg_c's. */
+ for (idx = REG_C_2; idx <= REG_C_7; ++idx) {
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .priority = MLX5_FLOW_PRIO_RSVD,
+ .ingress = 1,
+ };
+ struct rte_flow_item items[] = {
+ [0] = {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action actions[] = {
+ [0] = {
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &(struct mlx5_flow_action_copy_mreg){
+ .src = REG_C_1,
+ .dst = idx,
+ },
+ },
+ [1] = {
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &(struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ },
+ },
+ [2] = {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ struct rte_flow *flow;
+ struct rte_flow_error error;
+
+ if (!config->dv_flow_en)
+ break;
+ /* Create internal flow, validation skips copy action. */
+ flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, &error);
+ if (!flow)
+ continue;
+ if (dev->data->dev_started || !flow_drv_apply(dev, flow, NULL))
+ config->flow_mreg_c[n++] = idx;
+ flow_list_destroy(dev, NULL, flow);
+ }
+ for (; n < MLX5_MREG_C_NUM; ++n)
+ config->flow_mreg_c[n] = REG_NONE;
+ return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b9a9507..f2b6726 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -27,19 +27,6 @@
#include "mlx5.h"
#include "mlx5_prm.h"
-enum modify_reg {
- REG_A,
- REG_B,
- REG_C_0,
- REG_C_1,
- REG_C_2,
- REG_C_3,
- REG_C_4,
- REG_C_5,
- REG_C_6,
- REG_C_7,
-};
-
/* Private rte flow items. */
enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9b2eba5..da3589f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -832,6 +832,7 @@ struct field_modify_info modify_tcp[] = {
}
static enum mlx5_modification_field reg_to_field[] = {
+ [REG_NONE] = MLX5_MODI_OUT_NONE,
[REG_A] = MLX5_MODI_META_DATA_REG_A,
[REG_B] = MLX5_MODI_META_DATA_REG_B,
[REG_C_0] = MLX5_MODI_META_REG_C_0,
@@ -1040,7 +1041,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
if (!spec)
return 0;
- esw_priv = mlx5_port_to_eswitch_info(spec->id);
+ esw_priv = mlx5_port_to_eswitch_info(spec->id, false);
if (!esw_priv)
return rte_flow_error_set(error, rte_errno,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, spec,
@@ -2697,7 +2698,7 @@ struct field_modify_info modify_tcp[] = {
"failed to obtain E-Switch info");
port_id = action->conf;
port = port_id->original ? dev->data->port_id : port_id->id;
- act_priv = mlx5_port_to_eswitch_info(port);
+ act_priv = mlx5_port_to_eswitch_info(port, false);
if (!act_priv)
return rte_flow_error_set
(error, rte_errno,
@@ -5092,7 +5093,7 @@ struct field_modify_info modify_tcp[] = {
mask = pid_m ? pid_m->id : 0xffff;
id = pid_v ? pid_v->id : dev->data->port_id;
- priv = mlx5_port_to_eswitch_info(id);
+ priv = mlx5_port_to_eswitch_info(id, item == NULL);
if (!priv)
return -rte_errno;
/* Translate to vport field or to metadata, depending on mode. */
@@ -5540,7 +5541,7 @@ struct field_modify_info modify_tcp[] = {
(const struct rte_flow_action_port_id *)action->conf;
port = conf->original ? dev->data->port_id : conf->id;
- priv = mlx5_port_to_eswitch_info(port);
+ priv = mlx5_port_to_eswitch_info(port, false);
if (!priv)
return rte_flow_error_set(error, -rte_errno,
RTE_FLOW_ERROR_TYPE_ACTION,
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b9e53f5..c17ba66 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -392,6 +392,7 @@ enum {
/* The field of packet to be modified. */
enum mlx5_modification_field {
+ MLX5_MODI_OUT_NONE = -1,
MLX5_MODI_OUT_SMAC_47_16 = 1,
MLX5_MODI_OUT_SMAC_15_0,
MLX5_MODI_OUT_ETHERTYPE,
@@ -455,6 +456,23 @@ enum mlx5_modification_field {
MLX5_MODI_IN_TCP_ACK_NUM = 0x5C,
};
+/* Total number of metadata reg_c's. */
+#define MLX5_MREG_C_NUM (MLX5_MODI_META_REG_C_7 - MLX5_MODI_META_REG_C_0 + 1)
+
+enum modify_reg {
+ REG_NONE = 0,
+ REG_A,
+ REG_B,
+ REG_C_0,
+ REG_C_1,
+ REG_C_2,
+ REG_C_3,
+ REG_C_4,
+ REG_C_5,
+ REG_C_6,
+ REG_C_7,
+};
+
/* Modification sub command. */
struct mlx5_modification_cmd {
union {
--
1.8.3.1
* [dpdk-dev] [PATCH v2 09/19] net/mlx5: add devarg for extensive metadata support
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (7 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
` (9 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The PMD parameter dv_xmeta_en is added to control extensive
metadata support. A nonzero value enables extensive flow
metadata support if the device is capable and the driver supports it.
This enables extensive support of the MARK and META items of
rte_flow. The newly introduced SET_TAG and SET_META actions
do not depend on the dv_xmeta_en parameter, because there is
no compatibility issue for the new entities. dv_xmeta_en is
disabled by default.
The possible configurations, depending on the parameter value, are:
- 0, the default value, selects the legacy mode: the MARK
and META related actions and items operate only within the NIC Tx
and NIC Rx steering domains, no MARK and META information
crosses the domain boundaries. The MARK item is 24 bits wide,
the META item is 32 bits wide.
- 1, this engages the extensive metadata mode: the MARK and META
related actions and items operate within all supported steering
domains, including FDB, and MARK and META information may cross
the domain boundaries. The MARK item is 24 bits wide, the
META item width depends on kernel and firmware configurations
and might be 0, 16 or 32 bits. Within the NIC Tx domain META data
width is 32 bits for compatibility, the actual width of data
transferred to the FDB domain depends on kernel configuration
and may vary. The actual supported width can be retrieved
at runtime by a series of rte_flow_validate() trials.
- 2, this engages the extensive metadata mode: the MARK and META
related actions and items operate within all supported steering
domains, including FDB, and MARK and META information may cross
the domain boundaries. The META item is 32 bits wide, the MARK
item width depends on kernel and firmware configurations and
might be 0, 16 or 24 bits. The actual supported width can be
retrieved at runtime by a series of rte_flow_validate() trials.
If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
ignored and the device is configured to operate in legacy mode (0).
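To illustrate the runtime width discovery mentioned above, an application
could probe the effective META width by a series of rte_flow_validate()
trials, shrinking the item mask until the PMD accepts the rule. The snippet
below is only a sketch; the port id, the DROP action and the halving
strategy are assumptions for illustration, not part of this patch:

#include <stdint.h>
#include <rte_flow.h>

/* Return the widest META mask (in bits) the port accepts, 0 if none. */
static uint32_t
probe_meta_width(uint16_t port_id)
{
    uint32_t width;

    for (width = 32; width > 0; width /= 2) {
        struct rte_flow_item_meta spec = { .data = 1 };
        struct rte_flow_item_meta mask = {
            .data = width == 32 ? UINT32_MAX : (1u << width) - 1,
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_META,
              .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_error err;

        /* The first width the PMD validates is the usable one. */
        if (rte_flow_validate(port_id, &attr, pattern,
                              actions, &err) == 0)
            return width;
    }
    return 0;
}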
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/mlx5.rst | 49 ++++++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.c | 33 +++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_defs.h | 4 ++++
drivers/net/mlx5/mlx5_prm.h | 3 +++
5 files changed, 90 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..0ccc1c8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -578,6 +578,55 @@ Run-time configuration
Disabled by default.
+- ``dv_xmeta_en`` parameter [int]
+
+ A nonzero value enables extensive flow metadata support if the device is
+ capable and the driver supports it. This enables extensive support of
+ the ``MARK`` and ``META`` items of ``rte_flow``. The newly introduced
+ ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
+
+ There are some possible configurations, depending on parameter value:
+
+ - 0, this is default value, defines the legacy mode, the ``MARK`` and
+ ``META`` related actions and items operate only within NIC Tx and
+ NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
+ the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
+ item is 32 bits wide and match is supported on egress only.
+
+ - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 32 bits. Within NIC Tx domain ``META`` data width is 32 bits for
+ compatibility, the actual width of data transferred to the FDB domain
+ depends on kernel configuration and may vary. The actual supported
+ width can be retrieved at runtime by a series of rte_flow_validate()
+ trials.
+
+ - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 24 bits. The actual supported width can be retrieved at runtime by
+ a series of rte_flow_validate() trials.
+
+ +------+-----------+-----------+-------------+-------------+
+ | Mode | ``MARK`` | ``META`` | ``META`` Tx | FDB/Through |
+ +======+===========+===========+=============+=============+
+ | 0 | 24 bits | 32 bits | 32 bits | no |
+ +------+-----------+-----------+-------------+-------------+
+ | 1 | 24 bits | vary 0-32 | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+ | 2 | vary 0-32 | 32 bits | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+
+ If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
+ ignored and the device is configured to operate in legacy mode (0).
+
+ Disabled by default (set to 0).
+
- ``dv_flow_en`` parameter [int]
A nonzero value enables the DV flow steering assuming it is supported
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1b86b7b..943d0e8 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -125,6 +125,9 @@
/* Activate DV flow steering. */
#define MLX5_DV_FLOW_EN "dv_flow_en"
+/* Enable extensive flow metadata support. */
+#define MLX5_DV_XMETA_EN "dv_xmeta_en"
+
/* Activate Netlink support in VF mode. */
#define MLX5_VF_NL_EN "vf_nl_en"
@@ -1310,6 +1313,16 @@ struct mlx5_flow_id_pool *
config->dv_esw_en = !!tmp;
} else if (strcmp(MLX5_DV_FLOW_EN, key) == 0) {
config->dv_flow_en = !!tmp;
+ } else if (strcmp(MLX5_DV_XMETA_EN, key) == 0) {
+ if (tmp != MLX5_XMETA_MODE_LEGACY &&
+ tmp != MLX5_XMETA_MODE_META16 &&
+ tmp != MLX5_XMETA_MODE_META32) {
+ DRV_LOG(WARNING, "invalid extensive "
+ "metadata parameter");
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ config->dv_xmeta_en = tmp;
} else if (strcmp(MLX5_MR_EXT_MEMSEG_EN, key) == 0) {
config->mr_ext_memseg_en = !!tmp;
} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
@@ -1361,6 +1374,7 @@ struct mlx5_flow_id_pool *
MLX5_VF_NL_EN,
MLX5_DV_ESW_EN,
MLX5_DV_FLOW_EN,
+ MLX5_DV_XMETA_EN,
MLX5_MR_EXT_MEMSEG_EN,
MLX5_REPRESENTOR,
MLX5_MAX_DUMP_FILES_NUM,
@@ -1734,6 +1748,12 @@ struct mlx5_flow_id_pool *
rte_errno = EINVAL;
return rte_errno;
}
+ if (sh_conf->dv_xmeta_en ^ config->dv_xmeta_en) {
+ DRV_LOG(ERR, "\"dv_xmeta_en\" configuration mismatch"
+ " for shared %s context", sh->ibdev_name);
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
return 0;
}
/**
@@ -2347,10 +2367,23 @@ struct mlx5_flow_id_pool *
err = -err;
goto error;
}
+ if (!priv->config.dv_esw_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(WARNING, "metadata mode %u is not supported "
+ "(no E-Switch)", priv->config.dv_xmeta_en);
+ priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
eth_dev->data->port_id);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata registers available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
}
return eth_dev;
error:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6b82c6d..e59f8f6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -238,6 +238,7 @@ struct mlx5_dev_config {
unsigned int vf_nl_en:1; /* Enable Netlink requests in VF mode. */
unsigned int dv_esw_en:1; /* Enable E-Switch DV flow. */
unsigned int dv_flow_en:1; /* Enable DV flow. */
+ unsigned int dv_xmeta_en:2; /* Enable extensive flow metadata. */
unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
unsigned int devx:1; /* Whether devx interface is available or not. */
unsigned int dest_tir:1; /* Whether advanced DR API is available. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index e36ab55..a77c430 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -141,6 +141,10 @@
/* Cache size of mempool for Multi-Packet RQ. */
#define MLX5_MPRQ_MP_CACHE_SZ 32U
+#define MLX5_XMETA_MODE_LEGACY 0
+#define MLX5_XMETA_MODE_META16 1
+#define MLX5_XMETA_MODE_META32 2
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index c17ba66..b405cb6 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -226,6 +226,9 @@
/* Default mark value used when none is provided. */
#define MLX5_FLOW_MARK_DEFAULT 0xffffff
+/* Default mark mask for metadata legacy mode. */
+#define MLX5_FLOW_MARK_MASK 0xffffff
+
/* Maximum number of DS in WQE. Limited by 6-bit field. */
#define MLX5_DSEG_MAX 63
--
1.8.3.1
* [dpdk-dev] [PATCH v2 10/19] net/mlx5: adjust shared register according to mask
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (8 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
` (8 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The metadata register reg_c[0] might be used by the kernel or
firmware for their internal purposes. The actually used mask
can be queried from the kernel. The remaining bits can be
used by the PMD to provide the META or MARK features. The code queries
the mask of reg_c[0] and adjusts the resource usage dynamically.
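As a worked example of the adjustment described above, assume the kernel
reports that it keeps the low 16 bits of reg_c[0] for vport matching, leaving
the high 16 bits to the PMD; the resulting META mask in META16 mode could then
be derived as in this standalone, illustration-only sketch (the concrete mask
value is an assumption, not taken from any real device):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint32_t vport_meta_mask = 0x0000FFFF; /* assumed kernel usage */
    uint32_t reg_c0 = ~vport_meta_mask;    /* 0xFFFF0000 left to the PMD */
    /* META16 mode: META travels in the free bits, shifted down to bit 0. */
    uint32_t meta = reg_c0 >> __builtin_ctz(reg_c0); /* 0x0000FFFF */

    printf("reg_c0 mask %08X -> META mask %08X (16 usable bits)\n",
           reg_c0, meta);
    return 0;
}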
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 95 +++++++++++++++++++++++++++++++++++------
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_flow_dv.c | 41 ++++++++++++++++--
3 files changed, 122 insertions(+), 17 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 943d0e8..fb7b94b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1584,6 +1584,60 @@ struct mlx5_flow_id_pool *
}
/**
+ * Configures the metadata mask fields in the shared context.
+ *
+ * @param [in] dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_set_metadata_mask(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ibv_shared *sh = priv->sh;
+ uint32_t meta, mark, reg_c0;
+
+ reg_c0 = ~priv->vport_meta_mask;
+ switch (priv->config.dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ meta = UINT32_MAX;
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META16:
+ meta = reg_c0 >> rte_bsf32(reg_c0);
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META32:
+ meta = UINT32_MAX;
+ mark = (reg_c0 >> rte_bsf32(reg_c0)) & MLX5_FLOW_MARK_MASK;
+ break;
+ default:
+ meta = 0;
+ mark = 0;
+ assert(false);
+ break;
+ }
+ if (sh->dv_mark_mask && sh->dv_mark_mask != mark)
+ DRV_LOG(WARNING, "metadata MARK mask mismatche %08X:%08X",
+ sh->dv_mark_mask, mark);
+ else
+ sh->dv_mark_mask = mark;
+ if (sh->dv_meta_mask && sh->dv_meta_mask != meta)
+ DRV_LOG(WARNING, "metadata META mask mismatche %08X:%08X",
+ sh->dv_meta_mask, meta);
+ else
+ sh->dv_meta_mask = meta;
+ if (sh->dv_regc0_mask && sh->dv_regc0_mask != reg_c0)
+ DRV_LOG(WARNING, "metadata reg_c0 mask mismatche %08X:%08X",
+ sh->dv_meta_mask, reg_c0);
+ else
+ sh->dv_regc0_mask = reg_c0;
+ DRV_LOG(DEBUG, "metadata mode %u", priv->config.dv_xmeta_en);
+ DRV_LOG(DEBUG, "metadata MARK mask %08X", sh->dv_mark_mask);
+ DRV_LOG(DEBUG, "metadata META mask %08X", sh->dv_meta_mask);
+ DRV_LOG(DEBUG, "metadata reg_c0 mask %08X", sh->dv_regc0_mask);
+}
+
+/**
* Allocate page of door-bells and register it using DevX API.
*
* @param [in] dev
@@ -1803,7 +1857,7 @@ struct mlx5_flow_id_pool *
uint16_t port_id;
unsigned int i;
#ifdef HAVE_MLX5DV_DR_DEVX_PORT
- struct mlx5dv_devx_port devx_port;
+ struct mlx5dv_devx_port devx_port = { .comp_mask = 0 };
#endif
/* Determine if this port representor is supposed to be spawned. */
@@ -2035,13 +2089,17 @@ struct mlx5_flow_id_pool *
* vport index. The engaged part of metadata register is
* defined by mask.
*/
- devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
- MLX5DV_DEVX_PORT_MATCH_REG_C_0;
- err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port, &devx_port);
- if (err) {
- DRV_LOG(WARNING, "can't query devx port %d on device %s",
- spawn->ibv_port, spawn->ibv_dev->name);
- devx_port.comp_mask = 0;
+ if (switch_info->representor || switch_info->master) {
+ devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
+ MLX5DV_DEVX_PORT_MATCH_REG_C_0;
+ err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port,
+ &devx_port);
+ if (err) {
+ DRV_LOG(WARNING,
+ "can't query devx port %d on device %s",
+ spawn->ibv_port, spawn->ibv_dev->name);
+ devx_port.comp_mask = 0;
+ }
}
if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
priv->vport_meta_tag = devx_port.reg_c_0.value;
@@ -2361,18 +2419,27 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
- /* Query availability of metadata reg_c's. */
- err = mlx5_flow_discover_mreg_c(eth_dev);
- if (err < 0) {
- err = -err;
- goto error;
- }
if (!priv->config.dv_esw_en &&
priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
DRV_LOG(WARNING, "metadata mode %u is not supported "
"(no E-Switch)", priv->config.dv_xmeta_en);
priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
}
+ mlx5_set_metadata_mask(eth_dev);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ !priv->sh->dv_regc0_mask) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata reg_c[0] is available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e59f8f6..92d445a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -622,6 +622,9 @@ struct mlx5_ibv_shared {
} mr;
/* Shared DV/DR flow data section. */
pthread_mutex_t dv_mutex; /* DV context mutex. */
+ uint32_t dv_meta_mask; /* flow META metadata supported mask. */
+ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */
+ uint32_t dv_regc0_mask; /* available bits of metadata reg_c[0]. */
uint32_t dv_refcnt; /* DV/DR data reference counter. */
void *fdb_domain; /* FDB Direct Rules name space handle. */
struct mlx5_flow_tbl_resource fdb_tbl[MLX5_MAX_TABLES_FDB];
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index da3589f..fb56329 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -901,13 +901,13 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev,
struct mlx5_flow_dv_modify_hdr_resource *res,
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
const struct mlx5_flow_action_copy_mreg *conf = action->conf;
- uint32_t mask = RTE_BE32(UINT32_MAX);
+ rte_be32_t mask = RTE_BE32(UINT32_MAX);
struct rte_flow_item item = {
.spec = NULL,
.mask = &mask,
@@ -917,9 +917,44 @@ struct field_modify_info modify_tcp[] = {
{0, 0, 0},
};
struct field_modify_info reg_dst = {
- .offset = (uint32_t)-1, /* Same as src. */
+ .offset = 0,
.id = reg_to_field[conf->dst],
};
+ /* Adjust reg_c[0] usage according to reported mask. */
+ if (conf->dst == REG_C_0 || conf->src == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t reg_c0 = priv->sh->dv_regc0_mask;
+
+ assert(reg_c0);
+ assert(priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY);
+ if (conf->dst == REG_C_0) {
+ /* Copy to reg_c[0], within mask only. */
+ reg_dst.offset = rte_bsf32(reg_c0);
+ /*
+ * The mask ignores the endianness, because
+ * there is no conversion in the datapath.
+ */
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ /* Copy from destination lower bits to reg_c[0]. */
+ mask = reg_c0 >> reg_dst.offset;
+#else
+ /* Copy from destination upper bits to reg_c[0]. */
+ mask = reg_c0 << (sizeof(reg_c0) * CHAR_BIT -
+ rte_fls_u32(reg_c0));
+#endif
+ } else {
+ mask = rte_cpu_to_be_32(reg_c0);
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ /* Copy from reg_c[0] to destination lower bits. */
+ reg_dst.offset = 0;
+#else
+ /* Copy from reg_c[0] to destination upper bits. */
+ reg_dst.offset = sizeof(reg_c0) * CHAR_BIT -
+ (rte_fls_u32(reg_c0) -
+ rte_bsf32(reg_c0));
+#endif
+ }
+ }
return flow_dv_convert_modify_action(&item,
reg_src, &reg_dst, res,
MLX5_MODIFICATION_TYPE_COPY,
--
1.8.3.1
* [dpdk-dev] [PATCH v2 11/19] net/mlx5: check the maximal modify actions number
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (9 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
` (7 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
If the extensive metadata registers are supported,
it can be assumed that the extensive
metadata support is possible as well, e.g. the metadata register
copy action, support for 16 modify header actions,
register preservation across different steering domains
(FDB and NIC) and so on.
This patch sets the maximal number of header modify
actions depending on the discovered metadata register
support.
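Restated as a standalone snippet for clarity, the cap mirrors the
flow_dv_modify_hdr_action_max() helper added below (the constants and names
come from this patch; the wrapper here is an illustrative sketch only):

#include <stdbool.h>

#define MLX5_MODIFY_NUM          16 /* with extensive metadata registers */
#define MLX5_MODIFY_NUM_NO_MREG   8 /* without them */

/* Upper bound on modify-header commands accepted in a single flow. */
static inline unsigned int
modify_hdr_action_max(bool ext_mreg_supported)
{
    return ext_mreg_supported ? MLX5_MODIFY_NUM : MLX5_MODIFY_NUM_NO_MREG;
}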
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 9 +++++++--
drivers/net/mlx5/mlx5_flow_dv.c | 25 +++++++++++++++++++++++++
2 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f2b6726..c1d0a65 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -348,8 +348,13 @@ struct mlx5_flow_dv_tag_resource {
uint32_t tag; /**< the tag value. */
};
-/* Number of modification commands. */
-#define MLX5_MODIFY_NUM 8
+/*
+ * Number of modification commands.
+ * If extensive metadata registers are supported,
+ * the maximal number of actions is 16, otherwise 8.
+ */
+#define MLX5_MODIFY_NUM 16
+#define MLX5_MODIFY_NUM_NO_MREG 8
/* Modify resource structure */
struct mlx5_flow_dv_modify_hdr_resource {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index fb56329..80280ab 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2749,6 +2749,27 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get the maximum number of modify header actions.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * Max number of modify header actions device can support.
+ */
+static unsigned int
+flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev)
+{
+ /*
+ * There's no way to directly query the max cap. Although it has to be
+ * acquired by iterative trial, it is a safe assumption that more
+ * actions are supported by FW if extensive metadata register is
+ * supported.
+ */
+ return mlx5_flow_ext_mreg_supported(dev) ? MLX5_MODIFY_NUM :
+ MLX5_MODIFY_NUM_NO_MREG;
+}
+/**
* Find existing modify-header resource or create and register a new one.
*
* @param dev[in, out]
@@ -2775,6 +2796,10 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_flow_dv_modify_hdr_resource *cache_resource;
struct mlx5dv_dr_domain *ns;
+ if (resource->actions_num > flow_dv_modify_hdr_action_max(dev))
+ return rte_flow_error_set(error, EOVERFLOW,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many modify header items");
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
ns = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX)
--
1.8.3.1
* [dpdk-dev] [PATCH v2 12/19] net/mlx5: update metadata register id query
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (10 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
` (6 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The NIC might support up to 8 extensive metadata registers.
These registers are supposed to be shared by multiple features.
A register id query routine is provided to determine which
register is actually used by the specified feature.
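The snippet below re-expresses, as a self-contained illustration, how the
query routine resolves the MARK and Rx META features for each dv_xmeta_en
mode. The register assignments are taken from mlx5_flow_get_reg_id() in this
patch; the local enum and helper names are stand-ins for the sketch only:

#include <stdio.h>

enum xmeta_mode { LEGACY = 0, META16 = 1, META32 = 2 };

static const char *
mark_register(enum xmeta_mode mode)
{
    switch (mode) {
    case LEGACY: return "none (legacy MARK via flow tag)";
    case META16: return "reg_c[1]";
    case META32: return "reg_c[0]";
    }
    return "?";
}

static const char *
meta_rx_register(enum xmeta_mode mode)
{
    switch (mode) {
    case LEGACY: return "reg_b";
    case META16: return "reg_c[0]";
    case META32: return "reg_c[1]";
    }
    return "?";
}

int
main(void)
{
    int mode;

    for (mode = LEGACY; mode <= META32; mode++)
        printf("dv_xmeta_en=%d: MARK -> %s, META(Rx) -> %s\n",
               mode, mark_register(mode), meta_rx_register(mode));
    return 0;
}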
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 88 +++++++++++++++++++++++++++++---------------
drivers/net/mlx5/mlx5_flow.h | 17 +++++++++
2 files changed, 75 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f32ea8d..b87657a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -316,12 +316,6 @@ struct mlx5_flow_tunnel_info {
},
};
-enum mlx5_feature_name {
- MLX5_HAIRPIN_RX,
- MLX5_HAIRPIN_TX,
- MLX5_APPLICATION,
-};
-
/**
* Translate tag ID to register.
*
@@ -338,37 +332,70 @@ enum mlx5_feature_name {
* The request register on success, a negative errno
* value otherwise and rte_errno is set.
*/
-__rte_unused
-static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
- enum mlx5_feature_name feature,
- uint32_t id,
- struct rte_flow_error *error)
+enum modify_reg
+mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error)
{
- static enum modify_reg id2reg[] = {
- [0] = REG_A,
- [1] = REG_C_2,
- [2] = REG_C_3,
- [3] = REG_C_4,
- [4] = REG_B,};
-
- dev = (void *)dev;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
switch (feature) {
case MLX5_HAIRPIN_RX:
return REG_B;
case MLX5_HAIRPIN_TX:
return REG_A;
- case MLX5_APPLICATION:
- if (id > 4)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "invalid tag id");
- return id2reg[id];
+ case MLX5_METADATA_RX:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_B;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_0;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_1;
+ }
+ break;
+ case MLX5_METADATA_TX:
+ return REG_A;
+ case MLX5_METADATA_FDB:
+ return REG_C_0;
+ case MLX5_FLOW_MARK:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_NONE;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_1;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_0;
+ }
+ break;
+ case MLX5_COPY_MARK:
+ return REG_C_3;
+ case MLX5_APP_TAG:
+ /*
+ * Suppose engaging reg_c_2 .. reg_c_7 registers.
+ * reg_c_2 is reserved for coloring by meters.
+ * reg_c_3 is reserved for split flows TAG.
+ */
+ if (id > (REG_C_7 - REG_C_4))
+ return rte_flow_error_set
+ (error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "invalid tag id");
+ if (config->flow_mreg_c[id + REG_C_4 - REG_C_0] == REG_NONE)
+ return rte_flow_error_set
+ (error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "unsupported tag id");
+ return config->flow_mreg_c[id + REG_C_4 - REG_C_0];
}
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ assert(false);
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "invalid feature name");
}
-
/**
* Check extensive flow metadata register support.
*
@@ -2667,7 +2694,6 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
struct mlx5_rte_flow_item_tag *tag_item;
struct rte_flow_item *item;
char *addr;
- struct rte_flow_error error;
int encap = 0;
mlx5_flow_id_get(priv->sh->flow_id_pool, flow_id);
@@ -2733,7 +2759,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
rte_memcpy(actions_rx, actions, sizeof(struct rte_flow_action));
actions_rx++;
set_tag = (void *)actions_rx;
- set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
+ set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
@@ -2743,7 +2770,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
tag_item->data = *flow_id;
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ tag_item->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c1d0a65..9371e11 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -63,6 +63,18 @@ struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
};
+/* Feature name to allocate metadata register. */
+enum mlx5_feature_name {
+ MLX5_HAIRPIN_RX,
+ MLX5_HAIRPIN_TX,
+ MLX5_METADATA_RX,
+ MLX5_METADATA_TX,
+ MLX5_METADATA_FDB,
+ MLX5_FLOW_MARK,
+ MLX5_APP_TAG,
+ MLX5_COPY_MARK,
+};
+
/* Pattern outer Layer bits. */
#define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0)
#define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1)
@@ -534,6 +546,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_query_t query;
};
+
#define MLX5_CNT_CONTAINER(sh, batch, thread) (&(sh)->cmng.ccont \
[(((sh)->cmng.mhi[batch] >> (thread)) & 0x1) * 2 + (batch)])
#define MLX5_CNT_CONTAINER_UNUSED(sh, batch, thread) (&(sh)->cmng.ccont \
@@ -554,6 +567,10 @@ uint64_t mlx5_flow_hashfields_adjust(struct mlx5_flow *dev_flow, int tunnel,
uint64_t hash_fields);
uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
uint32_t subpriority);
+enum modify_reg mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error);
const struct rte_flow_action *mlx5_flow_find_action
(const struct rte_flow_action *actions,
enum rte_flow_action_type action);
--
1.8.3.1
* [dpdk-dev] [PATCH v2 13/19] net/mlx5: add flow tag support
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (11 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
` (5 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add support for the new rte_flow item and action - TAG and SET_TAG. TAG is
a transient value which can be kept during flow matching.
This is supported through the device metadata registers reg_c[]. Although
8 registers are available on the current mlx5 devices,
some of them can be reserved for firmware or kernel purposes.
The availability should be queried by the iterative trial-and-error
mlx5_flow_discover_mreg_c() routine.
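A minimal sketch of how an application might use the new entities: one flow
sets a transient tag and jumps to another group, where a second flow matches
the tag. The group numbers, port id, tag value and the final DROP action are
illustrative assumptions, not part of this patch:

#include <stdint.h>
#include <rte_flow.h>

static int
tag_and_match(uint16_t port_id)
{
    struct rte_flow_error err;
    /* Group 0: set transient tag index 0 and jump to group 1. */
    struct rte_flow_attr attr0 = { .group = 0, .ingress = 1 };
    struct rte_flow_action_set_tag set_tag = {
        .data = 0xcafe, .mask = UINT32_MAX, .index = 0,
    };
    struct rte_flow_action_jump jump = { .group = 1 };
    struct rte_flow_item pattern0[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions0[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SET_TAG, .conf = &set_tag },
        { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    /* Group 1: match the transient tag value set above. */
    struct rte_flow_attr attr1 = { .group = 1, .ingress = 1 };
    struct rte_flow_item_tag tag_spec = { .data = 0xcafe, .index = 0 };
    struct rte_flow_item pattern1[] = {
        { .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions1[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (!rte_flow_create(port_id, &attr0, pattern0, actions0, &err) ||
        !rte_flow_create(port_id, &attr1, pattern1, actions1, &err))
        return -1;
    return 0;
}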
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 232 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 228 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 80280ab..fec2efe 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -872,10 +872,12 @@ struct field_modify_info modify_tcp[] = {
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"too many items to modify");
+ assert(conf->id != REG_NONE);
+ assert(conf->id < RTE_DIM(reg_to_field));
actions[i].action_type = MLX5_MODIFICATION_TYPE_SET;
actions[i].field = reg_to_field[conf->id];
actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- actions[i].data1 = conf->data;
+ actions[i].data1 = rte_cpu_to_be_32(conf->data);
++i;
resource->actions_num = i;
if (!resource->actions_num)
@@ -886,6 +888,52 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert SET_TAG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_tag
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_action_set_tag *conf,
+ struct rte_flow_error *error)
+{
+ rte_be32_t data = rte_cpu_to_be_32(conf->data);
+ rte_be32_t mask = rte_cpu_to_be_32(conf->mask);
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum mlx5_modification_field reg_type;
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ assert((unsigned int)ret < RTE_DIM(reg_to_field));
+ reg_type = reg_to_field[ret];
+ assert(reg_type > 0);
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_type};
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Convert internal COPY_REG action to DV specification.
*
* @param[in] dev
@@ -1016,6 +1064,65 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate TAG item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tag *spec = item->spec;
+ const struct rte_flow_item_tag *mask = item->mask;
+ const struct rte_flow_item_tag nic_mask = {
+ .data = RTE_BE32(UINT32_MAX),
+ .index = 0xff,
+ };
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extensive metadata register"
+ " isn't supported");
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (!mask)
+ mask = &rte_flow_item_tag_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_tag),
+ error);
+ if (ret < 0)
+ return ret;
+ if (mask->index != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
+ "partial mask for tag index"
+ " is not supported");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, spec->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ return 0;
+}
+
+/**
* Validate vport item.
*
* @param[in] dev
@@ -1376,6 +1483,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_TAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the encap action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_tag *conf;
+ const uint64_t terminal_action_flags =
+ MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE |
+ MLX5_FLOW_ACTION_RSS;
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extensive metadata register"
+ " isn't supported");
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_tag *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ if (!attr->transfer && attr->ingress &&
+ (action_flags & terminal_action_flags))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "set_tag has no effect"
+ " with terminal actions");
+ return 0;
+}
+
+/**
* Validate count action.
*
* @param[in] dev
@@ -3765,6 +3928,13 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ ret = flow_dv_validate_item_tag(dev, items,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
break;
@@ -3812,6 +3982,17 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK;
++actions_n;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ ret = flow_dv_validate_action_set_tag(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
ret = mlx5_flow_validate_action_drop(action_flags,
attr, error);
@@ -5099,8 +5280,38 @@ struct field_modify_info modify_tcp[] = {
{
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
- enum modify_reg reg = tag_v->id;
+ assert(tag_v);
+ flow_dv_match_meta_reg(matcher, key, tag_v->id, tag_v->data,
+ tag_m ? tag_m->data : UINT32_MAX);
+}
+
+/**
+ * Add TAG item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_tag(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ const struct rte_flow_item_tag *tag_v = item->spec;
+ const struct rte_flow_item_tag *tag_m = item->mask;
+ enum modify_reg reg;
+
+ assert(tag_v);
+ tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask;
+ /* Get the metadata register index for the tag. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL);
+ assert(reg > 0);
flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
@@ -5775,6 +5986,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ if (flow_dv_convert_action_set_tag
+ (dev, &mhdr_res,
+ (const struct rte_flow_action_set_tag *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
@@ -6055,7 +6274,7 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
- if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
+ if (mhdr_res.actions_num) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
(dev, &mhdr_res, dev_flow, error))
@@ -6067,7 +6286,7 @@ struct field_modify_info modify_tcp[] = {
default:
break;
}
- if ((action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) &&
+ if (mhdr_res.actions_num &&
modify_action_position == UINT32_MAX)
modify_action_position = actions_n++;
}
@@ -6230,6 +6449,11 @@ struct field_modify_info modify_tcp[] = {
items, tunnel);
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ flow_dv_translate_item_tag(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
flow_dv_translate_mlx5_item_tag(match_mask,
match_value, items);
--
1.8.3.1
* [dpdk-dev] [PATCH v2 14/19] net/mlx5: extend flow mark support
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (12 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
` (4 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The flow MARK item is newly supported along with the MARK action. The MARK
action and item are supported on both Rx and Tx. They work on the
metadata reg_c[] only if the extensive flow metadata registers are
supported. Without that support, the MARK action behaves the same as
before - valid only on Rx, and no MARK item is valid.
The FLAG action is modified accordingly. The FLAG action is
supported on both Rx and Tx via reg_c[] if the extensive flow
metadata registers are supported.
However, the new MARK/FLAG item and action are currently
disabled until register copy on loopback is supported by
forthcoming patches.
The actual index of the metadata reg_c[] register engaged to
support the FLAG/MARK actions depends on the dv_xmeta_en devarg value.
For extensive metadata mode 1 reg_c[1] is used and the
transitive MARK data width is 24 bits. For extensive metadata mode 2
reg_c[0] is used and the transitive MARK data width might be
restricted to 0 or 16 bits, depending on kernel usage of reg_c[0].
The actual supported width can be discovered by a series of trials
with rte_flow_validate().
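For illustration, once MARK is carried in reg_c[], a rule in another group can
match a mark value that an earlier flow set with the MARK action. The group
number, port id, mark id and queue index below are assumptions for the sketch
only:

#include <rte_flow.h>

static struct rte_flow *
match_marked_packets(uint16_t port_id)
{
    struct rte_flow_error err;
    struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
    struct rte_flow_item_mark mark_spec = { .id = 0x1234 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Direct packets marked 0x1234 by an earlier flow to queue 0. */
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}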
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 5 +-
drivers/net/mlx5/mlx5_flow_dv.c | 383 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 370 insertions(+), 18 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9371e11..d6209ff 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -102,6 +102,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ITEM_METADATA (1u << 16)
#define MLX5_FLOW_ITEM_PORT_ID (1u << 17)
#define MLX5_FLOW_ITEM_TAG (1u << 18)
+#define MLX5_FLOW_ITEM_MARK (1u << 19)
/* Pattern MISC bits. */
#define MLX5_FLOW_LAYER_ICMP (1u << 19)
@@ -194,6 +195,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_INC_TCP_ACK (1u << 30)
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
+#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -228,7 +230,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_INC_TCP_ACK | \
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
- MLX5_FLOW_ACTION_SET_TAG)
+ MLX5_FLOW_ACTION_SET_TAG | \
+ MLX5_FLOW_ACTION_MARK_EXT)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index fec2efe..ec13edc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1010,6 +1010,125 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert MARK action to DV specification. This routine is used
+ * in extensive metadata only and requires metadata register to be
+ * handled. In legacy mode hardware tag resource is engaged.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] conf
+ * Pointer to MARK action specification.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action_mark *conf,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ rte_be32_t mask = rte_cpu_to_be_32(MLX5_FLOW_MARK_MASK &
+ priv->sh->dv_mark_mask);
+ rte_be32_t data = rte_cpu_to_be_32(conf->id) & mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ {4, 0, 0}, /* dynamic instead of MLX5_MODI_META_REG_C_1. */
+ {0, 0, 0},
+ };
+ enum modify_reg reg;
+
+ if (!mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "zero mark action mask");
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (reg < 0)
+ return reg;
+ assert(reg > 0);
+ reg_c_x[0].id = reg_to_field[reg];
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
+ * Validate MARK item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_item_mark *spec = item->spec;
+ const struct rte_flow_item_mark *mask = item->mask;
+ const struct rte_flow_item_mark nic_mask = {
+ .id = priv->sh->dv_mark_mask,
+ };
+ int ret;
+
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata feature"
+ " isn't enabled");
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ if (!nic_mask.id)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (spec->id >= (MLX5_FLOW_MARK_MAX & nic_mask.id))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &spec->id,
+ "mark id exceeds the limit");
+ if (!mask)
+ mask = &nic_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_mark),
+ error);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -1482,6 +1601,139 @@ struct field_modify_info modify_tcp[] = {
return 0;
}
+/*
+ * Validate the FLAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_flag(struct rte_eth_dev *dev,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_flag(action_flags, attr,
+ error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support flag action");
+ if (!(priv->sh->dv_mark_mask & MLX5_FLOW_MARK_DEFAULT))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't mark and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 flag"
+ " actions in same flow");
+ return 0;
+}
+
+/**
+ * Validate MARK action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action_mark *mark = action->conf;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_mark(action, action_flags,
+ attr, error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support mark action");
+ if (!priv->sh->dv_mark_mask)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (!mark)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ if (mark->id >= (MLX5_FLOW_MARK_MAX & priv->sh->dv_mark_mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &mark->id,
+ "mark id exceeds the limit");
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't flag and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 mark actions in same"
+ " flow");
+ return 0;
+}
+
/**
* Validate SET_TAG action.
*
@@ -3749,6 +4001,8 @@ struct field_modify_info modify_tcp[] = {
.dst_port = RTE_BE16(UINT16_MAX),
}
};
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
if (items == NULL)
return -1;
@@ -3905,6 +4159,14 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ ret = flow_dv_validate_item_mark(dev, items, attr,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
ret = flow_dv_validate_item_meta(dev, items, attr,
error);
@@ -3966,21 +4228,39 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
- ret = mlx5_flow_validate_action_flag(action_flags,
- attr, error);
+ ret = flow_dv_validate_action_flag(dev, action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_FLAG |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_MARK:
- ret = mlx5_flow_validate_action_mark(actions,
- action_flags,
- attr, error);
+ ret = flow_dv_validate_action_mark(dev, actions,
+ action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_MARK;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_MARK |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
@@ -4251,12 +4531,14 @@ struct field_modify_info modify_tcp[] = {
" actions in the same rule");
/* Eswitch has few restrictions on using items and actions */
if (attr->transfer) {
- if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_FLAG)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
"unsupported action FLAG");
- if (action_flags & MLX5_FLOW_ACTION_MARK)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_MARK)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
@@ -5219,6 +5501,44 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add MARK item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_mark(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_mark *mark;
+ uint32_t value;
+ uint32_t mask;
+
+ mark = item->mask ? (const void *)item->mask :
+ &rte_flow_item_mark_mask;
+ mask = mark->id & priv->sh->dv_mark_mask;
+ mark = (const void *)item->spec;
+ assert(mark);
+ value = mark->id & priv->sh->dv_mark_mask & mask;
+ if (mask) {
+ enum modify_reg reg;
+
+ /* Get the metadata register index for the mark. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, NULL);
+ assert(reg > 0);
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -5227,8 +5547,6 @@ struct field_modify_info modify_tcp[] = {
* Flow matcher value.
* @param[in] item
* Flow pattern to translate.
- * @param[in] inner
- * Item is inner pattern.
*/
static void
flow_dv_translate_item_meta(void *matcher, void *key,
@@ -5899,6 +6217,7 @@ struct field_modify_info modify_tcp[] = {
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
struct rte_flow *flow = dev_flow->flow;
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -5933,7 +6252,7 @@ struct field_modify_info modify_tcp[] = {
if (attr->transfer)
mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
- priority = priv->config.flow_prio - 1;
+ priority = dev_conf->flow_prio - 1;
for (; !actions_end ; actions++) {
const struct rte_flow_action_queue *queue;
const struct rte_flow_action_rss *rss;
@@ -5964,6 +6283,19 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_PORT_ID;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ struct rte_flow_action_mark mark = {
+ .id = MLX5_FLOW_MARK_DEFAULT,
+ };
+
+ if (flow_dv_convert_action_mark(dev, &mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
if (!dev_flow->dv.tag_resource)
@@ -5972,9 +6304,22 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ const struct rte_flow_action_mark *mark =
+ (const struct rte_flow_action_mark *)
+ actions->conf;
+
+ if (flow_dv_convert_action_mark(dev, mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
+ /* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
@@ -5984,7 +6329,6 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
@@ -6021,7 +6365,7 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
- if (!priv->config.devx) {
+ if (!dev_conf->devx) {
rte_errno = ENOTSUP;
goto cnt_err;
}
@@ -6434,6 +6778,11 @@ struct field_modify_info modify_tcp[] = {
items, last_item, tunnel);
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ flow_dv_translate_item_mark(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
flow_dv_translate_item_meta(match_mask, match_value,
items);
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 15/19] net/mlx5: extend flow meta data support
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (13 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
` (3 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
META item is supported on both Rx and Tx. The 'transfer' attribute
is supported as well, and the SET_META action is added.
Due to restrictions on reg_c[meta], the available bit width may
vary. If the devarg parameter dv_xmeta_en=1, META uses metadata
register reg_c[0], which may be required for internal kernel or
firmware needs. In this case the PMD queries the kernel about the
available fields in reg_c[0] and restricts the register usage
accordingly. If the devarg parameter dv_xmeta_en=2, META uses
reg_c[1] and there should be no limitation on the data width.
However, the extensive META feature is currently disabled until
register copy on loopback is supported by forthcoming patches.
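For context only (not part of this patch), a minimal application-side
sketch of exercising the new support through the generic rte_flow API;
the metadata value 0x1234 and queue index 1 are arbitrary examples:

  #include <stdint.h>
  #include <rte_flow.h>

  /* Egress rule: set META to 0x1234 on transmitted packets. */
  static struct rte_flow *
  set_meta_on_tx(uint16_t port_id, struct rte_flow_error *err)
  {
  	struct rte_flow_attr attr = { .egress = 1 };
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_set_meta meta = {
  		.data = 0x1234,
  		.mask = UINT32_MAX,
  	};
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &meta },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	return rte_flow_create(port_id, &attr, pattern, actions, err);
  }

  /* Ingress rule: match META == 0x1234 and direct to queue 1. */
  static struct rte_flow *
  match_meta_on_rx(uint16_t port_id, struct rte_flow_error *err)
  {
  	struct rte_flow_attr attr = { .ingress = 1 };
  	struct rte_flow_item_meta spec = { .data = 0x1234 };
  	struct rte_flow_item_meta mask = { .data = UINT32_MAX };
  	struct rte_flow_item pattern[] = {
  		{ .type = RTE_FLOW_ITEM_TYPE_META,
  		  .spec = &spec, .mask = &mask },
  		{ .type = RTE_FLOW_ITEM_TYPE_END },
  	};
  	struct rte_flow_action_queue queue = { .index = 1 };
  	struct rte_flow_action actions[] = {
  		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
  		{ .type = RTE_FLOW_ACTION_TYPE_END },
  	};

  	return rte_flow_create(port_id, &attr, pattern, actions, err);
  }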
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 4 +-
drivers/net/mlx5/mlx5_flow_dv.c | 255 +++++++++++++++++++++++++++++++++++++---
2 files changed, 240 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index d6209ff..ef16aef 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -196,6 +196,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
+#define MLX5_FLOW_ACTION_SET_META (1ull << 34)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -231,7 +232,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
MLX5_FLOW_ACTION_SET_TAG | \
- MLX5_FLOW_ACTION_MARK_EXT)
+ MLX5_FLOW_ACTION_MARK_EXT | \
+ MLX5_FLOW_ACTION_SET_META)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ec13edc..60ebbca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1060,6 +1060,103 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get metadata register index for specified steering domain.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ * Attributes of flow to determine steering domain.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * positive index on success, a negative errno value otherwise
+ * and rte_errno is set.
+ */
+static enum modify_reg
+flow_dv_get_metadata_reg(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ enum modify_reg reg =
+ mlx5_flow_get_reg_id(dev, attr->transfer ?
+ MLX5_METADATA_FDB :
+ attr->egress ?
+ MLX5_METADATA_TX :
+ MLX5_METADATA_RX, 0, error);
+ if (reg < 0)
+ return rte_flow_error_set(error,
+ ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "unavailable "
+ "metadata register");
+ return reg;
+}
+
+/**
+ * Convert SET_META action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_meta
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action_set_meta *conf,
+ struct rte_flow_error *error)
+{
+ uint32_t data = conf->data;
+ uint32_t mask = conf->mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum modify_reg reg = flow_dv_get_metadata_reg(dev, attr, error);
+
+ if (reg < 0)
+ return reg;
+ /*
+ * In datapath code there is no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0;
+
+ assert(msk_c0);
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ shl_c0 = rte_bsf32(msk_c0);
+#else
+ shl_c0 = sizeof(msk_c0) * CHAR_BIT - rte_fls_u32(msk_c0);
+#endif
+ mask <<= shl_c0;
+ data <<= shl_c0;
+ assert(!(~msk_c0 & rte_cpu_to_be_32(mask)));
+ }
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_to_field[reg]};
+ /* The routine expects parameters in memory as big-endian ones. */
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Validate MARK item.
*
* @param[in] dev
@@ -1149,11 +1246,14 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_attr *attr,
struct rte_flow_error *error)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
const struct rte_flow_item_meta *spec = item->spec;
const struct rte_flow_item_meta *mask = item->mask;
- const struct rte_flow_item_meta nic_mask = {
+ struct rte_flow_item_meta nic_mask = {
.data = UINT32_MAX
};
+ enum modify_reg reg;
int ret;
if (!spec)
@@ -1163,23 +1263,32 @@ struct field_modify_info modify_tcp[] = {
"data cannot be empty");
if (!spec->data)
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
- NULL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
"data cannot be zero");
+ if (config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg == REG_B)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "match on reg_b "
+ "isn't supported");
+ if (reg != REG_A)
+ nic_mask.data = priv->sh->dv_meta_mask;
+ }
if (!mask)
mask = &rte_flow_item_meta_mask;
ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
(const uint8_t *)&nic_mask,
sizeof(struct rte_flow_item_meta),
error);
- if (ret < 0)
- return ret;
- if (attr->ingress)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
- NULL,
- "pattern not supported for ingress");
- return 0;
+ return ret;
}
/**
@@ -1735,6 +1844,67 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_META action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the encap action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_meta(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags __rte_unused,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_meta *conf;
+ uint32_t nic_mask = UINT32_MAX;
+ enum modify_reg reg;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg != REG_A && reg != REG_B) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ nic_mask = priv->sh->dv_meta_mask;
+ }
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_meta *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ if (conf->mask & ~nic_mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "meta data must be within reg C0");
+ if (!(conf->data & conf->mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero value has no effect");
+ return 0;
+}
+
+/**
* Validate SET_TAG action.
*
* @param[in] dev
@@ -4262,6 +4432,17 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
}
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ ret = flow_dv_validate_action_set_meta(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
action_flags,
@@ -5541,15 +5722,21 @@ struct field_modify_info modify_tcp[] = {
/**
* Add META item to matcher
*
+ * @param[in] dev
+ * The device to configure through.
* @param[in, out] matcher
* Flow matcher.
* @param[in, out] key
* Flow matcher value.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
* @param[in] item
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_meta(void *matcher, void *key,
+flow_dv_translate_item_meta(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_attr *attr,
const struct rte_flow_item *item)
{
const struct rte_flow_item_meta *meta_m;
@@ -5559,10 +5746,34 @@ struct field_modify_info modify_tcp[] = {
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v)
- flow_dv_match_meta_reg(matcher, key, REG_A,
- rte_cpu_to_be_32(meta_v->data),
- rte_cpu_to_be_32(meta_m->data));
+ if (meta_v) {
+ enum modify_reg reg;
+ uint32_t value = meta_v->data;
+ uint32_t mask = meta_m->data;
+
+ reg = flow_dv_get_metadata_reg(dev, attr, NULL);
+ if (reg < 0)
+ return;
+ /*
+ * In datapath code there is no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ value = rte_cpu_to_be_32(value);
+ mask = rte_cpu_to_be_32(mask);
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0 = rte_bsf32(msk_c0);
+
+ msk_c0 = rte_cpu_to_be_32(msk_c0);
+ value <<= shl_c0;
+ mask <<= shl_c0;
+ assert(msk_c0);
+ assert(!(~msk_c0 & mask));
+ }
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
}
/**
@@ -6330,6 +6541,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ if (flow_dv_convert_action_set_meta
+ (dev, &mhdr_res, attr,
+ (const struct rte_flow_action_set_meta *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
(dev, &mhdr_res,
@@ -6784,8 +7003,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_ITEM_MARK;
break;
case RTE_FLOW_ITEM_TYPE_META:
- flow_dv_translate_item_meta(match_mask, match_value,
- items);
+ flow_dv_translate_item_meta(dev, match_mask,
+ match_value, attr, items);
last_item = MLX5_FLOW_ITEM_METADATA;
break;
case RTE_FLOW_ITEM_TYPE_ICMP:
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 16/19] net/mlx5: add meta data support to Rx datapath
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (14 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
` (2 subsequent siblings)
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
This patch moves metadata from the completion descriptor
to the appropriate dynamic mbuf field.
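For reference (application side, not part of this patch), once
rte_flow_dynf_metadata_register() has been called before the flows
using metadata are created, the delivered value can be read from the
dynamic mbuf field roughly as follows; the burst size of 32 is an
arbitrary example:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>
  #include <rte_flow.h>

  /* Poll one Rx queue and inspect per-packet flow metadata. */
  static void
  read_rx_metadata(uint16_t port_id, uint16_t queue_id)
  {
  	struct rte_mbuf *pkts[32];
  	uint16_t n = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
  	uint16_t i;

  	for (i = 0; i < n; i++) {
  		if (pkts[i]->ol_flags & PKT_RX_DYNF_METADATA) {
  			/* Value copied by the PMD from the CQE. */
  			uint32_t md = *RTE_FLOW_DYNF_METADATA(pkts[i]);

  			(void)md; /* Application-specific handling. */
  		}
  		rte_pktmbuf_free(pkts[i]);
  	}
  }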
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_prm.h | 6 ++++--
drivers/net/mlx5/mlx5_rxtx.c | 5 +++++
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +++++++++++++++++++++++--
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +++++++++++++++++++++++----
5 files changed, 78 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b405cb6..a0c37c8 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -357,12 +357,14 @@ struct mlx5_cqe {
uint16_t hdr_type_etc;
uint16_t vlan_info;
uint8_t lro_num_seg;
- uint8_t rsvd3[11];
+ uint8_t rsvd3[3];
+ uint32_t flow_table_metadata;
+ uint8_t rsvd4[4];
uint32_t byte_cnt;
uint64_t timestamp;
uint32_t sop_drop_qpn;
uint16_t wqe_counter;
- uint8_t rsvd4;
+ uint8_t rsvd5;
uint8_t op_own;
};
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 887e283..f28a909 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -26,6 +26,7 @@
#include <rte_branch_prediction.h>
#include <rte_ether.h>
#include <rte_cycles.h>
+#include <rte_flow.h>
#include "mlx5.h"
#include "mlx5_utils.h"
@@ -1251,6 +1252,10 @@ enum mlx5_txcmp_code {
pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
}
}
+ if (rte_flow_dynf_metadata_avail() && cqe->flow_table_metadata) {
+ pkt->ol_flags |= PKT_RX_DYNF_METADATA;
+ *RTE_FLOW_DYNF_METADATA(pkt) = cqe->flow_table_metadata;
+ }
if (rxq->csum)
pkt->ol_flags |= rxq_cq_to_ol_flags(cqe);
if (rxq->vlan_strip &&
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 3be3a6d..8e79883 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -416,7 +416,6 @@
vec_cmpeq((vector unsigned int)flow_tag,
(vector unsigned int)pinfo_ft_mask)));
}
-
/*
* Merge the two fields to generate the following:
* bit[1] = l3_ok
@@ -1011,7 +1010,29 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
-
+ if (rte_flow_dynf_metadata_avail()) {
+ uint64_t flag = rte_flow_dynf_metadata_mask;
+ int offs = rte_flow_dynf_metadata_offs;
+ uint32_t metadata;
+
+ /* This code is subject to further optimization. */
+ metadata = cq[pos].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos], offs, uint32_t *) =
+ metadata;
+ pkts[pos]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 1].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 1], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 1]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 2].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 2], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 2]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 3].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 3], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 3]->ol_flags |= metadata ? flag : 0ULL;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vec_perm(op_own, zero, len_shuf_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index e914d01..86785c7 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -687,6 +687,29 @@
container_of(p3, struct mlx5_cqe,
pkt_info)->timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(elts[pos]) =
+ container_of(p0, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 1]) =
+ container_of(p1, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 2]) =
+ container_of(p2, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 3]) =
+ container_of(p3, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos]))
+ elts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 1]))
+ elts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 2]))
+ elts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 3]))
+ elts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vbic_u16(byte_cnt, invalid_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index ca8ed41..35b7761 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -537,8 +537,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].csum);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x30);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd4[2]);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x04);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -564,8 +564,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].csum);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x30);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd4[2]);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x04);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -640,6 +640,25 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(pkts[pos]) =
+ cq[pos].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 1]) =
+ cq[pos + p1].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 2]) =
+ cq[pos + p2].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 3]) =
+ cq[pos + p3].flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos]))
+ pkts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 1]))
+ pkts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 2]))
+ pkts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 3]))
+ pkts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = _mm_shuffle_epi8(op_own, len_shuf_mask);
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 17/19] net/mlx5: introduce flow splitters chain
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (15 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The mlx5 hardware has some limitations and a flow might need
to be split into multiple internal subflows. For example, this
is needed to share a meter object between multiple flows or to
copy metadata registers before the final queue/RSS action.
Multiple features might require several levels of splitting.
For example, the hairpin feature splits the original flow into
two parts - Rx and Tx. Then the RSS feature should split the Rx
part into multiple subflows with extended item sets. Then the
metering feature might require splitting each RSS subflow into
a meter jump chain, and the extensive metadata support might
require the final subflow splitting. So, we have to organize
a chain of splitting subroutines to abstract each level of
splitting.
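To illustrate the pattern only (flow_create_split_example and the
example_* helpers below are hypothetical; the real levels are added by
the following patches), an intermediate splitter decides whether it has
to split, builds its own action sub-lists and calls the next level once
per produced subflow:

  /* Hypothetical intermediate level of the splitting chain. */
  static int
  flow_create_split_example(struct rte_eth_dev *dev,
  			  struct rte_flow *flow,
  			  const struct rte_flow_attr *attr,
  			  const struct rte_flow_item items[],
  			  const struct rte_flow_action actions[],
  			  bool external, struct rte_flow_error *error)
  {
  	int ret;

  	/* Nothing to do at this level - pass the flow through. */
  	if (!example_split_needed(actions))
  		return flow_create_split_inner(dev, flow, NULL, attr, items,
  					       actions, external, error);
  	/* Prefix subflow with the modified action list. */
  	ret = flow_create_split_inner(dev, flow, NULL, attr, items,
  				      example_prefix_actions(actions),
  				      external, error);
  	if (ret < 0)
  		return ret;
  	/* Suffix subflow completing the original behavior. */
  	return flow_create_split_inner(dev, flow, NULL, attr, items,
  				       example_suffix_actions(actions),
  				       external, error);
  }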
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 116 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 106 insertions(+), 10 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b87657a..d97a0b2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2786,6 +2786,103 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * The last stage of splitting chain, just creates the subflow
+ * without any modification.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in, out] sub_flow
+ * Pointer to return the created subflow, may be NULL.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_inner(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct mlx5_flow **sub_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_flow *dev_flow;
+
+ dev_flow = flow_drv_prepare(flow, attr, items, actions, error);
+ if (!dev_flow)
+ return -rte_errno;
+ dev_flow->flow = flow;
+ dev_flow->external = external;
+ /* Subflow object was created, we must include one in the list. */
+ LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
+ if (sub_flow)
+ *sub_flow = dev_flow;
+ return flow_drv_translate(dev, dev_flow, attr, items, actions, error);
+}
+
+/**
+ * Split the flow to subflow set. The splitters might be linked
+ * in the chain, like this:
+ * flow_create_split_outer() calls:
+ * flow_create_split_meter() calls:
+ * flow_create_split_metadata(meter_subflow_0) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ * flow_create_split_metadata(meter_subflow_1) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ *
+ * This provides a flexible way to add new levels of flow splitting.
+ * All of the successfully created subflows are included in the
+ * parent flow dev_flow list.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_outer(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ int ret;
+
+ ret = flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ assert(ret <= 0);
+ return ret;
+}
+
+/**
* Create a flow and add it to @p list.
*
* @param dev
@@ -2903,16 +3000,15 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
buf->entry[0].pattern = (void *)(uintptr_t)items;
}
for (i = 0; i < buf->entries; ++i) {
- dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
- p_actions_rx, error);
- if (!dev_flow)
- goto error;
- dev_flow->flow = flow;
- dev_flow->external = external;
- LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
- ret = flow_drv_translate(dev, dev_flow, attr,
- buf->entry[i].pattern,
- p_actions_rx, error);
+ /*
+ * The splitter may create multiple dev_flows,
+ * depending on configuration. In the simplest
+ * case it just creates the unmodified original flow.
+ */
+ ret = flow_create_split_outer(dev, flow, attr,
+ buf->entry[i].pattern,
+ p_actions_rx, external,
+ error);
if (ret < 0)
goto error;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 18/19] net/mlx5: split Rx flows to provide metadata copy
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (16 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Values set by MARK and SET_META actions should be carried over
to the VF representor in case of flow miss on the Tx path. However,
not all metadata registers are preserved across the different
domains (NIC Rx/Tx and E-Switch FDB). As a workaround, those
values should be carried by reg_c registers, which are preserved
across domains, and copied to the STE flow_tag (MARK) and reg_b
(META) fields in the last stage of flow steering, in order to
scatter those values to the flow_tag and flow_table_metadata of the CQE.
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it should
be manually set by a flow for each MARK ID. For this purpose, there
should be a dedicated flow table - RX_CP_TBL - and all the Rx flows
should pass through the table to properly copy values.
As the last action of Rx flow steering must be a terminal action
such as QUEUE, RSS or DROP, if a user flow has a Q/RSS action, the
flow must be split in order to pass through the RX_CP_TBL, and the
remaining Q/RSS action will be performed by another dedicated
table - RX_ACT_TBL.
For example, for an ingress flow:
pattern,
actions_having_QRSS
it must be split into two flows. The first one is,
pattern,
actions_except_QRSS / copy (reg_c[2] := flow_id) / jump to RX_CP_TBL
and the second one in RX_ACT_TBL.
(if reg_c[2] == flow_id),
action_QRSS
where flow_id is a uniquely allocated and managed identifier.
This patch implements the Rx flow splitting and builds the RX_ACT_TBL.
Also, for each egress flow on NIC Tx, a copy action (reg_c[] := reg_a)
should be added in order to transfer metadata from the WQE.
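Schematically (fragment for illustration only, as it would appear
inside the split preparation; the mlx5_rte_flow_* types are the
driver-internal ones from mlx5_flow.h, while copy_mark_reg and flow_id
are placeholders for the register returned by mlx5_flow_get_reg_id()
for MLX5_COPY_MARK and for the allocated subflow ID), the two resulting
subflows are composed roughly as:

  /* Prefix subflow: original actions with Q/RSS replaced by SET_TAG. */
  struct mlx5_rte_flow_action_set_tag set_tag = {
  	.id = copy_mark_reg,	/* reg_c[2] per the description above */
  	.data = flow_id,
  };
  struct rte_flow_action_jump jump = {
  	.group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
  };
  struct rte_flow_action prefix_actions[] = {
  	/* ... original actions except QUEUE/RSS ... */
  	{ .type = MLX5_RTE_FLOW_ACTION_TYPE_TAG, .conf = &set_tag },
  	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
  	{ .type = RTE_FLOW_ACTION_TYPE_END },
  };
  /* Suffix subflow in RX_ACT_TBL: match the flow ID, then do Q/RSS. */
  struct mlx5_rte_flow_item_tag tag_spec = {
  	.id = copy_mark_reg,
  	.data = flow_id,
  };
  struct rte_flow_item suffix_items[] = {
  	{ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
  	{ .type = RTE_FLOW_ITEM_TYPE_END },
  };
  /* Suffix actions: the original QUEUE/RSS action followed by END. */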
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 8 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.c | 428 ++++++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 1 +
4 files changed, 436 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index fb7b94b..6359bc9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2411,6 +2411,12 @@ struct mlx5_flow_id_pool *
err = mlx5_alloc_shared_dr(priv);
if (err)
goto error;
+ priv->qrss_id_pool = mlx5_flow_id_pool_alloc();
+ if (!priv->qrss_id_pool) {
+ DRV_LOG(ERR, "can't create flow id pool");
+ err = ENOMEM;
+ goto error;
+ }
}
/* Supported Verbs flow priority number detection. */
err = mlx5_flow_discover_priorities(eth_dev);
@@ -2463,6 +2469,8 @@ struct mlx5_flow_id_pool *
close(priv->nl_socket_rdma);
if (priv->vmwa_context)
mlx5_vlan_vmwa_exit(priv->vmwa_context);
+ if (priv->qrss_id_pool)
+ mlx5_flow_id_pool_release(priv->qrss_id_pool);
if (own_domain_id)
claim_zero(rte_eth_switch_domain_free(priv->domain_id));
rte_free(priv);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 92d445a..9c1a88a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -733,6 +733,7 @@ struct mlx5_priv {
uint32_t nl_sn; /* Netlink message sequence number. */
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
+ struct mlx5_flow_id_pool *qrss_id_pool;
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d97a0b2..2f6ace0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2222,6 +2222,49 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Allocate unique ID for the split Q/RSS subflows. */
+static uint32_t
+flow_qrss_get_id(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t qrss_id, ret;
+
+ ret = mlx5_flow_id_get(priv->qrss_id_pool, &qrss_id);
+ if (ret)
+ return 0;
+ assert(qrss_id);
+ return qrss_id;
+}
+
+/* Free unique ID for the split Q/RSS subflows. */
+static void
+flow_qrss_free_id(struct rte_eth_dev *dev, uint32_t qrss_id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ if (qrss_id)
+ mlx5_flow_id_release(priv->qrss_id_pool, qrss_id);
+}
+
+/**
+ * Release resource related QUEUE/RSS action split.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Flow to release id's from.
+ */
+static void
+flow_mreg_split_qrss_release(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow *dev_flow;
+
+ LIST_FOREACH(dev_flow, &flow->dev_flows, next)
+ if (dev_flow->qrss_id)
+ flow_qrss_free_id(dev, dev_flow->qrss_id);
+}
+
static int
flow_null_validate(struct rte_eth_dev *dev __rte_unused,
const struct rte_flow_attr *attr __rte_unused,
@@ -2511,6 +2554,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
const struct mlx5_flow_driver_ops *fops;
enum mlx5_flow_drv_type type = flow->drv_type;
+ flow_mreg_split_qrss_release(dev, flow);
assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
fops = flow_get_drv_ops(type);
fops->destroy(dev, flow);
@@ -2581,6 +2625,41 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Get QUEUE/RSS action from the action list.
+ *
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] qrss
+ * Pointer to the return pointer. Left unchanged if no QUEUE/RSS action
+ * is found.
+ *
+ * @return
+ * Total number of actions.
+ */
+static int
+flow_parse_qrss_action(const struct rte_flow_action actions[],
+ const struct rte_flow_action **qrss)
+{
+ int actions_n = 0;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ *qrss = actions;
+ break;
+ default:
+ break;
+ }
+ actions_n++;
+ }
+ /* Count RTE_FLOW_ACTION_TYPE_END. */
+ return actions_n + 1;
+}
+
+/**
* Check if the flow should be split due to hairpin.
* The reason for the split is that in current HW we can't
* support encap on Rx, so if a flow have encap we move it
@@ -2832,6 +2911,351 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Split action list having QUEUE/RSS for metadata register copy.
+ *
+ * Once Q/RSS action is detected in user's action list, the flow action
+ * should be split in order to copy metadata registers, which will happen in
+ * RX_CP_TBL like,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * The Q/RSS action will be performed on RX_ACT_TBL after passing by RX_CP_TBL.
+ * This is because the last action of each flow must be a terminal action
+ * (QUEUE, RSS or DROP).
+ *
+ * Flow ID must be allocated to identify actions in the RX_ACT_TBL and it is
+ * stored and kept in the mlx5_flow structure per each sub_flow.
+ *
+ * The Q/RSS action is replaced with,
+ * - SET_TAG, setting the allocated flow ID to reg_c[2].
+ * And the following JUMP action is added at the end,
+ * - JUMP, to RX_CP_TBL.
+ *
+ * A flow to perform remained Q/RSS action will be created in RX_ACT_TBL by
+ * flow_create_split_metadata() routine. The flow will look like,
+ * - If flow ID matches (reg_c[2]), perform Q/RSS.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] split_actions
+ * Pointer to store split actions to jump to CP_TBL.
+ * @param[in] actions
+ * Pointer to the list of original flow actions.
+ * @param[in] qrss
+ * Pointer to the Q/RSS action.
+ * @param[in] actions_n
+ * Number of original actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * non-zero unique flow_id on success, otherwise 0 and
+ * error/rte_error are set.
+ */
+static uint32_t
+flow_mreg_split_qrss_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *split_actions,
+ const struct rte_flow_action *actions,
+ const struct rte_flow_action *qrss,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_rte_flow_action_set_tag *set_tag;
+ struct rte_flow_action_jump *jump;
+ const int qrss_idx = qrss - actions;
+ uint32_t flow_id;
+ int ret = 0;
+
+ /*
+ * Given actions will be split
+ * - Replace QUEUE/RSS action with SET_TAG to set flow ID.
+ * - Add jump to mreg CP_TBL.
+ * As a result, there will be one more action.
+ */
+ ++actions_n;
+ /*
+ * Allocate the new subflow ID. This one is unique within
+ * the device and not shared with representors. Otherwise,
+ * we would have to resolve multi-thread access synchronization
+ * issues. Each flow on the shared device is appended
+ * with source vport identifier, so the resulting
+ * flows will be unique in the shared (by master and
+ * representors) domain even if they have coinciding
+ * IDs.
+ */
+ flow_id = flow_qrss_get_id(dev);
+ if (!flow_id)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "can't allocate id "
+ "for split Q/RSS subflow");
+ /* Internal SET_TAG action to set flow ID. */
+ set_tag = (void *)(split_actions + actions_n);
+ *set_tag = (struct mlx5_rte_flow_action_set_tag){
+ .data = flow_id,
+ };
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ set_tag->id = ret;
+ /* JUMP action to jump to mreg copy table (CP_TBL). */
+ jump = (void *)(set_tag + 1);
+ *jump = (struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ };
+ /* Construct new actions array. */
+ memcpy(split_actions, actions, sizeof(*split_actions) * actions_n);
+ /* Replace QUEUE/RSS action. */
+ split_actions[qrss_idx] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ .conf = set_tag,
+ };
+ split_actions[actions_n - 2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = jump,
+ };
+ split_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return flow_id;
+}
+
+/**
+ * Extend the given action list for Tx metadata copy.
+ *
+ * Copy the given action list to the ext_actions and add flow metadata register
+ * copy action in order to copy reg_a set by WQE to reg_c[0].
+ *
+ * @param[out] ext_actions
+ * Pointer to the extended action list.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[in] actions_n
+ * Number of actions in the list.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_mreg_tx_copy_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *ext_actions,
+ const struct rte_flow_action *actions,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_flow_action_copy_mreg *cp_mreg =
+ (struct mlx5_flow_action_copy_mreg *)
+ (ext_actions + actions_n + 1);
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->dst = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_TX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->src = ret;
+ memcpy(ext_actions, actions,
+ sizeof(*ext_actions) * actions_n);
+ ext_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = cp_mreg,
+ };
+ ext_actions[actions_n] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return 0;
+}
+
+/**
+ * The splitting for metadata feature.
+ *
+ * - Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ *
+ * - All the actions on NIC Tx should have a mreg copy action to
+ * copy reg_a from WQE to reg_c[0].
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_metadata(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action *qrss = NULL;
+ struct rte_flow_action *ext_actions = NULL;
+ struct mlx5_flow *dev_flow = NULL;
+ uint32_t qrss_id = 0;
+ size_t act_size;
+ int actions_n;
+ int ret;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev))
+ return flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ actions_n = flow_parse_qrss_action(actions, &qrss);
+ if (qrss) {
+ /* Exclude hairpin flows from splitting. */
+ if (qrss->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *queue;
+
+ queue = qrss->conf;
+ if (mlx5_rxq_get_type(dev, queue->index) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ } else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ const struct rte_flow_action_rss *rss;
+
+ rss = qrss->conf;
+ if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ }
+ }
+ if (qrss) {
+ /*
+ * Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct rte_flow_action_set_tag) +
+ sizeof(struct rte_flow_action_jump);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /*
+ * Create the new actions list with removed Q/RSS action
+ * and appended set tag and jump to register copy table
+ * (RX_CP_TBL). We should preallocate unique tag ID here
+ * in advance, because it is needed for set tag action.
+ */
+ qrss_id = flow_mreg_split_qrss_prep(dev, ext_actions, actions,
+ qrss, actions_n, error);
+ if (!qrss_id) {
+ ret = -rte_errno;
+ goto exit;
+ }
+ } else if (attr->egress && !attr->transfer) {
+ /*
+ * All the actions on NIC Tx should have a metadata register
+ * copy action to copy reg_a from WQE to reg_c[meta]
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct mlx5_flow_action_copy_mreg);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /* Create the action list appended with copy register. */
+ ret = flow_mreg_tx_copy_prep(dev, ext_actions, actions,
+ actions_n, error);
+ if (ret < 0)
+ goto exit;
+ }
+ /* Add the unmodified original or prefix subflow. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow, attr, items,
+ ext_actions ? ext_actions : actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ if (qrss_id) {
+ const struct rte_flow_attr q_attr = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ .ingress = 1,
+ };
+ /* Internal PMD action to set register. */
+ struct mlx5_rte_flow_item_tag q_tag_spec = {
+ .data = qrss_id,
+ .id = 0,
+ };
+ struct rte_flow_item q_items[] = {
+ {
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &q_tag_spec,
+ .last = NULL,
+ .mask = NULL,
+ },
+ {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action q_actions[] = {
+ {
+ .type = qrss->type,
+ .conf = qrss->conf,
+ },
+ {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ uint64_t hash_fields = dev_flow->hash_fields;
+ /*
+ * Put the unique ID into the prefix subflow. The ID is freed
+ * after the prefix subflow is destroyed, i.e. when no actual
+ * flows use this ID anymore and identifier reallocation becomes
+ * possible (for example, for other flows in other threads).
+ */
+ dev_flow->qrss_id = qrss_id;
+ qrss_id = 0;
+ dev_flow = NULL;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ goto exit;
+ q_tag_spec.id = ret;
+ /* Add suffix subflow to execute Q/RSS. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow,
+ &q_attr, q_items, q_actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ dev_flow->hash_fields = hash_fields;
+ }
+
+exit:
+ /*
+ * We do not destroy the partially created sub_flows in case of error.
+ * These ones are included into parent flow list and will be destroyed
+ * by flow_drv_destroy.
+ */
+ flow_qrss_free_id(dev, qrss_id);
+ rte_free(ext_actions);
+ return ret;
+}
+
+/**
* Split the flow to subflow set. The splitters might be linked
* in the chain, like this:
* flow_create_split_outer() calls:
@@ -2876,8 +3300,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
{
int ret;
- ret = flow_create_split_inner(dev, flow, NULL, attr, items,
- actions, external, error);
+ ret = flow_create_split_metadata(dev, flow, attr, items,
+ actions, external, error);
assert(ret <= 0);
return ret;
}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ef16aef..c71938b 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -500,6 +500,7 @@ struct mlx5_flow {
#endif
struct mlx5_flow_verbs verbs;
};
+ uint32_t qrss_id; /**< Unique Q/RSS suffix subflow tag. */
bool external; /**< true if the flow is created external to PMD. */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 19/19] net/mlx5: add metadata register copy table
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
` (17 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
@ 2019-11-06 17:37 ` Viacheslav Ovsiienko
18 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-06 17:37 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it
should be manually set by a flow for each unique MARK ID. For
this purpose, there should be a dedicated flow table -
RX_CP_TBL - and all the Rx flows should pass through the table
to properly copy values from the register to the flow tag field.
And for each MARK action, a copy flow should be added
to RX_CP_TBL according to the MARK ID like:
(if reg_c[mark] == mark_id),
flow_tag := mark_id / reg_b := reg_c[meta] / jump to RX_ACT_TBL
For SET_META action, there can be only one default flow like:
reg_b := reg_c[meta] / jump to RX_ACT_TBL
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 17 ++
drivers/net/mlx5/mlx5.h | 7 +-
drivers/net/mlx5/mlx5_defs.h | 4 +
drivers/net/mlx5/mlx5_flow.c | 443 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 19 ++
drivers/net/mlx5/mlx5_flow_dv.c | 10 +-
6 files changed, 493 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6359bc9..db6f5c7 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1039,6 +1039,8 @@ struct mlx5_flow_id_pool *
priv->txqs = NULL;
}
mlx5_proc_priv_uninit(dev);
+ if (priv->mreg_cp_tbl)
+ mlx5_hlist_destroy(priv->mreg_cp_tbl);
mlx5_mprq_free_mp(dev);
mlx5_free_shared_dr(priv);
if (priv->rss_conf.rss_key != NULL)
@@ -2458,9 +2460,24 @@ struct mlx5_flow_id_pool *
goto error;
}
}
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(eth_dev) &&
+ priv->sh->dv_regc0_mask) {
+ MKSTR(hlist_name, "%s", MLX5_FLOW_MREG_HNAME);
+ priv->mreg_cp_tbl = mlx5_hlist_create(name,
+ MLX5_FLOW_MREG_HTABLE_SZ,
+ NULL);
+ if (!priv->mreg_cp_tbl) {
+ err = ENOMEM;
+ goto error;
+ }
+ }
return eth_dev;
error:
if (priv) {
+ if (priv->mreg_cp_tbl)
+ mlx5_hlist_destroy(priv->mreg_cp_tbl);
if (priv->sh)
mlx5_free_shared_dr(priv);
if (priv->nl_socket_route >= 0)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9c1a88a..619590b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -567,8 +567,9 @@ struct mlx5_flow_tbl_resource {
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
/* Reserve the last two tables for metadata register copy. */
#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
-#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
- (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP (MLX5_MAX_TABLES - 2)
+/* Tables for metering splits should be added here. */
+#define MLX5_MAX_TABLES_EXTERNAL (MLX5_MAX_TABLES - 3)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -734,6 +735,8 @@ struct mlx5_priv {
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
struct mlx5_flow_id_pool *qrss_id_pool;
+ struct mlx5_hlist *mreg_cp_tbl;
+ /* Hash table of Rx metadata register copy table. */
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index a77c430..0ef532f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -145,6 +145,10 @@
#define MLX5_XMETA_MODE_META16 1
#define MLX5_XMETA_MODE_META32 2
+/* Size of the simple hash table for metadata register table. */
+#define MLX5_FLOW_MREG_HTABLE_SZ 4096
+#define MLX5_FLOW_MREG_HNAME "MARK_COPY_TABLE"
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2f6ace0..9ef7f7d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -671,7 +671,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ /*
+ * To support metadata register copy on Tx loopback,
+ * this must always be enabled (metadata may arrive
+ * from another port - not from local flows only).
+ */
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->rxq.mark = 1;
rxq_ctrl->flow_mark_n++;
}
@@ -735,7 +745,12 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->flow_mark_n--;
rxq_ctrl->rxq.mark = !!rxq_ctrl->flow_mark_n;
}
@@ -2731,6 +2746,398 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Declare flow create/destroy prototype in advance. */
+static struct rte_flow *
+flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error);
+
+static void
+flow_list_destroy(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ struct rte_flow *flow);
+
+/**
+ * Add a flow to copy flow metadata registers in RX_CP_TBL.
+ *
+ * As mark_id is unique, if a flow is already registered for the mark_id,
+ * return the registered resource after increasing its reference counter.
+ * Otherwise, create the resource (mcp_res) and the flow.
+ *
+ * Flow looks like,
+ * - If ingress port is ANY and reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For default flow (zero mark_id), flow is like,
+ * - If ingress port is ANY,
+ * reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param mark_id
+ * ID of MARK action, zero means default flow for META.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * Associated resource on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_mreg_copy_resource *
+flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .ingress = 1,
+ };
+ struct mlx5_rte_flow_item_tag tag_spec = {
+ .data = mark_id,
+ };
+ struct rte_flow_item items[] = {
+ [1] = { .type = RTE_FLOW_ITEM_TYPE_END, },
+ };
+ struct rte_flow_action_mark ftag = {
+ .id = mark_id,
+ };
+ struct mlx5_flow_action_copy_mreg cp_mreg = {
+ .dst = REG_B,
+ .src = 0,
+ };
+ struct rte_flow_action_jump jump = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ };
+ struct rte_flow_action actions[] = {
+ [3] = { .type = RTE_FLOW_ACTION_TYPE_END, },
+ };
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ int ret;
+
+ /* Fill the register fields in the flow. */
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return NULL;
+ tag_spec.id = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return NULL;
+ cp_mreg.src = ret;
+ /* Check if already registered. */
+ assert(priv->mreg_cp_tbl);
+ mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id);
+ if (mcp_res) {
+ /* For non-default rule. */
+ if (mark_id)
+ mcp_res->refcnt++;
+ assert(mark_id || mcp_res->refcnt == 1);
+ return mcp_res;
+ }
+ /* Provide the full width of FLAG specific value. */
+ if (mark_id == (priv->sh->dv_regc0_mask & MLX5_FLOW_MARK_DEFAULT))
+ tag_spec.data = MLX5_FLOW_MARK_DEFAULT;
+ /* Build a new flow. */
+ if (mark_id) {
+ items[0] = (struct rte_flow_item){
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &tag_spec,
+ };
+ items[1] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_MARK,
+ .conf = &ftag,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[3] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ } else {
+ /* Default rule, wildcard match. */
+ attr.priority = MLX5_FLOW_PRIO_RSVD;
+ items[0] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ }
+ /* Build a new entry. */
+ mcp_res = rte_zmalloc(__func__, sizeof(*mcp_res), 0);
+ if (!mcp_res) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ /*
+ * The copy flows are not included in any list. These
+ * ones are referenced from other flows and cannot
+ * be applied, removed, or deleted in arbitrary order
+ * by list traversal.
+ */
+ mcp_res->flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, error);
+ if (!mcp_res->flow)
+ goto error;
+ mcp_res->refcnt++;
+ mcp_res->hlist_ent.key = mark_id;
+ ret = mlx5_hlist_insert(priv->mreg_cp_tbl,
+ &mcp_res->hlist_ent);
+ assert(!ret);
+ if (ret)
+ goto error;
+ return mcp_res;
+error:
+ if (mcp_res->flow)
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ rte_free(mcp_res);
+ return NULL;
+}
+
+/**
+ * Release flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_del_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ if (!mcp_res || !priv->mreg_cp_tbl)
+ return;
+ if (flow->copy_applied) {
+ assert(mcp_res->appcnt);
+ flow->copy_applied = 0;
+ --mcp_res->appcnt;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+ }
+ /*
+ * We do not check availability of metadata registers here,
+ * because copy resources are allocated in this case.
+ */
+ if (--mcp_res->refcnt)
+ return;
+ assert(mcp_res->flow);
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent);
+ rte_free(mcp_res);
+ flow->mreg_copy = NULL;
+}
+
+/**
+ * Start flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_start_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+ int ret;
+
+ if (!mcp_res || flow->copy_applied)
+ return 0;
+ if (!mcp_res->appcnt) {
+ ret = flow_drv_apply(dev, mcp_res->flow, NULL);
+ if (ret)
+ return ret;
+ }
+ ++mcp_res->appcnt;
+ flow->copy_applied = 1;
+ return 0;
+}
+
+/**
+ * Stop flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_stop_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+
+ if (!mcp_res || !flow->copy_applied)
+ return;
+ assert(mcp_res->appcnt);
+ --mcp_res->appcnt;
+ flow->copy_applied = 0;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+}
+
+/**
+ * Remove the default copy action from RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+flow_mreg_del_default_copy_action(struct rte_eth_dev *dev)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ /* Check if default flow is registered. */
+ if (!priv->mreg_cp_tbl)
+ return;
+ mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, 0ULL);
+ if (!mcp_res)
+ return;
+ assert(mcp_res->flow);
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent);
+ rte_free(mcp_res);
+}
+
+/**
+ * Add the default copy action in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 for success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_add_default_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!priv->config.dv_flow_en ||
+ priv->config.dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ mcp_res = flow_mreg_add_copy_action(dev, 0, error);
+ if (!mcp_res)
+ return -rte_errno;
+ return 0;
+}
+
+/**
+ * Add a flow of copying flow metadata registers in RX_CP_TBL.
+ *
+ * All the flows having a Q/RSS action should be split by
+ * flow_mreg_split_qrss_prep() to pass through RX_CP_TBL. A flow in the RX_CP_TBL
+ * performs the following,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * As CQE's flow_tag is not a register, it can't be simply copied from reg_c[1],
+ * so there should be a flow for each MARK ID set by the MARK action.
+ *
+ * For the aforementioned reason, if there's a MARK action in flow's action
+ * list, a corresponding flow should be added to the RX_CP_TBL in order to copy
+ * the MARK ID to CQE's flow_tag like,
+ * - If reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For SET_META action which stores value in reg_c[0], as the destination is
+ * also a flow metadata register (reg_b), adding a default flow is enough. Zero
+ * MARK ID means the default flow. The default flow looks like,
+ * - For all flows, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Pointer to flow structure.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_update_copy_table(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *actions,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ const struct rte_flow_action_mark *mark;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ /* Find MARK action. */
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mcp_res = flow_mreg_add_copy_action
+ (dev, MLX5_FLOW_MARK_DEFAULT, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ mark = (const struct rte_flow_action_mark *)
+ actions->conf;
+ mcp_res =
+ flow_mreg_add_copy_action(dev, mark->id, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ default:
+ break;
+ }
+ }
+ return 0;
+}
+
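/*
 * Editorial illustration, not part of this patch: a sketch of the kind of
 * application rule that makes flow_mreg_update_copy_table() install an
 * RX_CP_TBL entry. The port number, MARK ID and queue index below are
 * arbitrary example values, and the usual rte_flow headers are assumed.
 */
static int
example_mark_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/*
	 * While creating this flow the PMD (with extensive metadata
	 * enabled) adds a copy flow for MARK ID 0x1234 to RX_CP_TBL,
	 * so received mbufs carry the mark via the CQE flow_tag.
	 */
	return rte_flow_create(port_id, &attr, pattern, actions, &error) ?
	       0 : -1;
}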
#define MLX5_MAX_SPLIT_ACTIONS 24
#define MLX5_MAX_SPLIT_ITEMS 24
@@ -3454,6 +3861,22 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
+ /*
+ * Update the metadata register copy table. If extensive
+ * metadata feature is enabled and registers are supported
+ * we might create the extra rte_flow for each unique
+ * MARK/FLAG action ID.
+ *
+ * The table is updated for ingress Flows only, because
+ * the egress Flows belong to a different device and the
+ * copy table should be updated in the peer NIC Rx domain.
+ */
+ if (attr->ingress &&
+ (external || attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) {
+ ret = flow_mreg_update_copy_table(dev, flow, actions, error);
+ if (ret)
+ goto error;
+ }
if (dev->data->dev_started) {
ret = flow_drv_apply(dev, flow, error);
if (ret < 0)
@@ -3469,6 +3892,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
hairpin_id);
return NULL;
error:
+ assert(flow);
+ flow_mreg_del_copy_action(dev, flow);
ret = rte_errno; /* Save rte_errno before cleanup. */
if (flow->hairpin_flow_id)
mlx5_flow_id_release(priv->sh->flow_id_pool,
@@ -3577,6 +4002,7 @@ struct rte_flow *
flow_drv_destroy(dev, flow);
if (list)
TAILQ_REMOVE(list, flow, next);
+ flow_mreg_del_copy_action(dev, flow);
rte_free(flow->fdir);
rte_free(flow);
}
@@ -3613,8 +4039,11 @@ struct rte_flow *
{
struct rte_flow *flow;
- TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next)
+ TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
flow_drv_remove(dev, flow);
+ flow_mreg_stop_copy_action(dev, flow);
+ }
+ flow_mreg_del_default_copy_action(dev);
flow_rxq_flags_clear(dev);
}
@@ -3636,7 +4065,15 @@ struct rte_flow *
struct rte_flow_error error;
int ret = 0;
+ /* Make sure default copy action (reg_c[0] -> reg_b) is created. */
+ ret = flow_mreg_add_default_copy_action(dev, &error);
+ if (ret < 0)
+ return -rte_errno;
+ /* Apply Flows created by application. */
TAILQ_FOREACH(flow, list, next) {
+ ret = flow_mreg_start_copy_action(dev, flow);
+ if (ret < 0)
+ goto error;
ret = flow_drv_apply(dev, flow, &error);
if (ret < 0)
goto error;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c71938b..560b2b1 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -38,6 +38,7 @@ enum mlx5_rte_flow_item_type {
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_MARK,
MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
@@ -417,6 +418,21 @@ struct mlx5_flow_dv_push_vlan_action_resource {
rte_be32_t vlan_tag; /**< VLAN tag value. */
};
+/* Metadata register copy table entry. */
+struct mlx5_flow_mreg_copy_resource {
+ /*
+ * Hash list entry for copy table.
+ * - Key is 32/64-bit MARK action ID.
+ * - MUST be the first entry.
+ */
+ struct mlx5_hlist_entry hlist_ent;
+ LIST_ENTRY(mlx5_flow_mreg_copy_resource) next;
+ /* List entry for device flows. */
+ uint32_t refcnt; /* Reference counter. */
+ uint32_t appcnt; /* Apply/Remove counter. */
+ struct rte_flow *flow; /* Built flow for copy. */
+};
+
/*
* Max number of actions per DV flow.
* See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED
@@ -510,10 +526,13 @@ struct rte_flow {
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
+ struct mlx5_flow_mreg_copy_resource *mreg_copy;
+ /**< pointer to metadata register copy table resource. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
+ uint32_t copy_applied:1; /**< The MARK copy Flow is applied. */
};
typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 60ebbca..f06227c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4086,8 +4086,11 @@ struct field_modify_info modify_tcp[] = {
NULL,
"groups are not supported");
#else
- uint32_t max_group = attributes->transfer ? MLX5_MAX_TABLES_FDB :
- MLX5_MAX_TABLES;
+ uint32_t max_group = attributes->transfer ?
+ MLX5_MAX_TABLES_FDB :
+ external ?
+ MLX5_MAX_TABLES_EXTERNAL :
+ MLX5_MAX_TABLES;
uint32_t table;
int ret;
@@ -4694,6 +4697,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
@@ -6530,6 +6534,8 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
break;
}
+ /* Fall-through */
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
/* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (21 preceding siblings ...)
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
` (19 more replies)
22 siblings, 20 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The modern networks operate on the base of the packet switching
approach, and in-network environment data are transmitted as the
packets. Within the host besides the data, actually transmitted
on the wire as packets, there might some out-of-band data helping
to process packets. These data are named as metadata, exist on
a per-packet basis and are attached to each packet as some extra
dedicated storage (in meaning it besides the packet data itself).
In the DPDK network data are represented as mbuf structure chains
and go along the application/DPDK datapath. From the other side,
DPDK provides Flow API to control the flow engine. Being precise,
there are two kinds of metadata in the DPDK, the one is purely
software metadata (as fields of mbuf - flags, packet types, data
length, etc.), and the other is metadata within flow engine.
In this scope, we cover the second type (flow engine metadata) only.
The flow engine metadata is some extra data, supported on the
per-packet basis and usually handled by hardware inside flow
engine.
Initially, there were proposed two metadata related actions:
- RTE_FLOW_ACTION_TYPE_FLAG
- RTE_FLOW_ACTION_TYPE_MARK
These actions set the special flag in the packet metadata, MARK
action stores some specified value in the metadata storage, and,
on the packet receiving PMD puts the flag and value to the mbuf
and applications can see the packet was threated inside flow engine
according to the appropriate RTE flow(s). MARK and FLAG are like
some kind of gateway to transfer some per-packet information from
the flow engine to the application via receiving datapath. Also,
there is the item of type RTE_FLOW_ITEM_TYPE_MARK provided. It
allows us to extend the flow match pattern with the capability
to match the metadata values set by MARK/FLAG actions on other
flows.
From the datapath point of view, the MARK and FLAG are related
to the receiving side only. It would useful to have the same gateway
on the transmitting side and there was the feature of type
RTE_FLOW_ITEM_TYPE_META was proposed. The application can fill
the field in mbuf and this value will be transferred to some field
in the packet metadata inside the flow engine.
It did not matter whether these metadata fields are shared because
of MARK and META items belonged to different domains (receiving and
transmitting) and could be vendor-specific.
So far, so good, DPDK proposes some entities to control metadata
inside the flow engine and gateways to exchange these values on
a per-packet basis via datapath.
As we can see, the MARK and META means are not symmetric, there
is absent action which would allow us to set META value on the
transmitting path. So, the action of type:
- RTE_FLOW_ACTION_TYPE_SET_META is proposed.
The next, applications raise the new requirements for packet
metadata. The flow engines are getting more complex, internal
switches are introduced, multiple ports might be supported within
the same flow engine namespace. From the DPDK points of view, it
means the packets might be sent on one eth_dev port and received
on the other one, and the packet path inside the flow engine
entirely belongs to the same hardware device. The simplest example
is SR-IOV with a PF, VFs and their representors. This gives an
opportunity to provide an out-of-band channel to transfer extra
data from one port to another, besides the packet data itself,
and applications would like to use this opportunity.
To improve the metadata definitions it is proposed to:
- treat the MARK and META metadata fields as dedicated, not shared
- extend the applicability of MARK and META items/actions to all
flow engine domains - transmitting and receiving
- allow MARK and META metadata to be preserved while crossing
the flow domains (from the transmit origin through the flow database
inside the (E-)Switch to the receiving side domain), in simple words,
to allow metadata to accompany the packet through the entire flow
engine space (see the sketch below).
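As an editorial sketch (not part of the original cover letter), such a
cross-domain usage could look as follows: META is set on the egress of a
VF port and matched on the ingress of its representor port. The 0xcafe
value is arbitrary, the fate actions and full patterns are omitted for
brevity, and the exact field layout/endianness follows the rte_flow
definitions of this release.

/* Tx side: set META on packets sent from the VF port. */
struct rte_flow_attr tx_attr = { .egress = 1 };
struct rte_flow_action_set_meta set_meta = { .data = 0xcafe, .mask = 0xffffffff };
struct rte_flow_action tx_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &set_meta },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Rx side: match the same META value on the representor port. */
struct rte_flow_attr rx_attr = { .ingress = 1 };
struct rte_flow_item_meta meta_spec = { .data = 0xcafe };
struct rte_flow_item rx_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};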
Another new proposed feature is transient per-packet storage
inside the flow engine. It might have a lot of use cases.
For example, if there is VXLAN tunneled traffic and some flow
performs VXLAN decapsulation and wishes to save information
regarding the dropped header, it could use this temporary
transient storage. The tools to maintain this storage are
traditional (for DPDK rte_flow API):
- RTE_FLOW_ACTION_TYPE_SET_TAG - to set value
- RTE_FLOW_ITEM_TYPE_TAG - to match on
There are primary properties of the proposed storage:
- the storage is presented as an array of 32-bit opaque values
- the size of array (or even bitmap of available indices) is
vendor specific and is subject to run-time trial
- it is transient, meaning it exists only inside the flow engine;
there are no gateways for interacting with the datapath, so
applications have no way either to specify these data on transmit
or to retrieve them on receive (see the sketch below)
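An editorial sketch (not part of the original cover letter) of how this
transient storage could be used, assuming tag index 0 is reported as
available by the device; the values and group numbers are arbitrary:

/* Group 0: remember a value in tag[0] and jump to group 1. */
struct rte_flow_action_set_tag set_tag = { .index = 0, .data = 0x5, .mask = 0x7 };
struct rte_flow_action_jump jump = { .group = 1 };
struct rte_flow_action group0_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TAG, .conf = &set_tag },
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Group 1: match on the value stored in tag[0] by the previous group. */
struct rte_flow_item_tag tag_spec = { .index = 0, .data = 0x5 };
struct rte_flow_item group1_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};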
This patchset implements the abovementioned extensive metadata
feature in the mlx5 PMD.
The patchset must be applied on top of the hash list patch:
[1] http://patches.dpdk.org/patch/62539/
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
v3: - moved missed part from isolated debug commit
- rebased
v2: - http://patches.dpdk.org/cover/62579/
- fix: metadata endianness
- fix: infinite loop in header modify update routine
- fix: reg_c_3 is reserved for split shared tag
- fix: vport mask and value endianness
- hash list implementation removed
- rebased
v1: http://patches.dpdk.org/cover/62419/
Viacheslav Ovsiienko (19):
net/mlx5: convert internal tag endianness
net/mlx5: update modify header action translator
net/mlx5: add metadata register copy
net/mlx5: refactor flow structure
net/mlx5: update flow functions
net/mlx5: update meta register matcher set
net/mlx5: rename structure and function
net/mlx5: check metadata registers availability
net/mlx5: add devarg for extensive metadata support
net/mlx5: adjust shared register according to mask
net/mlx5: check the maximal modify actions number
net/mlx5: update metadata register id query
net/mlx5: add flow tag support
net/mlx5: extend flow mark support
net/mlx5: extend flow meta data support
net/mlx5: add meta data support to Rx datapath
net/mlx5: introduce flow splitters chain
net/mlx5: split Rx flows to provide metadata copy
net/mlx5: add metadata register copy table
doc/guides/nics/mlx5.rst | 49 +
drivers/net/mlx5/mlx5.c | 150 ++-
drivers/net/mlx5/mlx5.h | 19 +-
drivers/net/mlx5/mlx5_defs.h | 8 +
drivers/net/mlx5/mlx5_ethdev.c | 8 +-
drivers/net/mlx5/mlx5_flow.c | 1201 ++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 108 ++-
drivers/net/mlx5/mlx5_flow_dv.c | 1566 ++++++++++++++++++++++++------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 +-
drivers/net/mlx5/mlx5_prm.h | 45 +-
drivers/net/mlx5/mlx5_rxtx.c | 5 +
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +-
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +-
14 files changed, 2866 insertions(+), 423 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 01/19] net/mlx5: convert internal tag endianness
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
` (18 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
Public API RTE_FLOW_ACTION_TYPE_SET_TAG and RTE_FLOW_ITEM_TYPE_TAG
present data in host-endian format, as do all metadata related
entities. The internal mlx5 tag related action and item should
use the same endianness to conform.
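An editorial sketch (not in the original commit message) of the effect,
inside a function of the PMD; it uses the internal mlx5 definitions touched
by this patch and assumes dev and flow_id are available in the calling
context, as in the hairpin split code below:

struct mlx5_rte_flow_item_tag tag_item = {
	.id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL),
	.data = *flow_id,	/* host-endian now, no rte_cpu_to_be_32() */
};
struct rte_flow_item item = {
	.type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
	.spec = &tag_item,
};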
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 6 +++---
drivers/net/mlx5/mlx5_flow.h | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b4b08f4..5408797 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2707,7 +2707,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
actions_rx++;
set_tag = (void *)actions_rx;
set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
- set_tag->data = rte_cpu_to_be_32(*flow_id);
+ set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action));
@@ -2715,8 +2715,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item = pattern_tx;
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
- tag_item->data = rte_cpu_to_be_32(*flow_id);
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, &error);
+ tag_item->data = *flow_id;
+ tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7559810..8cc6c47 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -56,13 +56,13 @@ enum mlx5_rte_flow_action_type {
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
uint16_t id;
- rte_be32_t data;
+ uint32_t data;
};
/* Matches on source queue. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 02/19] net/mlx5: update modify header action translator
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
` (17 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
When composing the device command for a modify header action, the
provided mask should be taken into account more accurately, so the length
and offset in the action are set at precise bit-wise boundaries.
For future use, the metadata register copy action is also added.
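An editorial sketch (not in the original commit message) of the bit
offset/width deduction the updated translator performs for each non-zero
mask segment; the mask value is an arbitrary example:

#include <rte_common.h>

static void
example_mask_to_bits(void)
{
	/* For mask 0x00ffff00 the device command modifies 16 bits
	 * starting at bit offset 8 of the field. */
	uint32_t mask = 0x00ffff00;
	unsigned int off_b = rte_bsf32(mask);			/* 8 */
	unsigned int size_b = 32 - off_b - __builtin_clz(mask);	/* 16 */

	RTE_SET_USED(off_b);
	RTE_SET_USED(size_b);
}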
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 150 ++++++++++++++++++++++++++++++----------
drivers/net/mlx5/mlx5_prm.h | 18 +++--
2 files changed, 128 insertions(+), 40 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 42c265f..6a3850a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -240,12 +240,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Fetch 1, 2, 3 or 4 byte field from the byte array
+ * and return as unsigned integer in host-endian format.
+ *
+ * @param[in] data
+ * Pointer to data array.
+ * @param[in] size
+ * Size of field to extract.
+ *
+ * @return
+ * Converted field in host-endian format.
+ */
+static inline uint32_t
+flow_dv_fetch_field(const uint8_t *data, uint32_t size)
+{
+ uint32_t ret;
+
+ switch (size) {
+ case 1:
+ ret = *data;
+ break;
+ case 2:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ break;
+ case 3:
+ ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+ ret = (ret << 8) | *(data + sizeof(uint16_t));
+ break;
+ case 4:
+ ret = rte_be_to_cpu_32(*(const unaligned_uint32_t *)data);
+ break;
+ default:
+ assert(false);
+ ret = 0;
+ break;
+ }
+ return ret;
+}
+
+/**
* Convert modify-header action to DV specification.
*
+ * Data length of each action is determined by provided field description
+ * and the item mask. Data bit offset and width of each action is determined
+ * by provided item mask.
+ *
* @param[in] item
* Pointer to item specification.
* @param[in] field
* Pointer to field modification information.
+ * For MLX5_MODIFICATION_TYPE_SET specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_ADD specifies destination field.
+ * For MLX5_MODIFICATION_TYPE_COPY specifies source field.
+ * @param[in] dcopy
+ * Destination field info for MLX5_MODIFICATION_TYPE_COPY in @type.
+ * Negative offset value sets the same offset as source offset.
+ * size field is ignored, value is taken from source field.
* @param[in,out] resource
* Pointer to the modify-header resource.
* @param[in] type
@@ -259,38 +309,68 @@ struct field_modify_info modify_tcp[] = {
static int
flow_dv_convert_modify_action(struct rte_flow_item *item,
struct field_modify_info *field,
+ struct field_modify_info *dcopy,
struct mlx5_flow_dv_modify_hdr_resource *resource,
- uint32_t type,
- struct rte_flow_error *error)
+ uint32_t type, struct rte_flow_error *error)
{
uint32_t i = resource->actions_num;
struct mlx5_modification_cmd *actions = resource->actions;
- const uint8_t *spec = item->spec;
- const uint8_t *mask = item->mask;
- uint32_t set;
-
- while (field->size) {
- set = 0;
- /* Generate modify command for each mask segment. */
- memcpy(&set, &mask[field->offset], field->size);
- if (set) {
- if (i >= MLX5_MODIFY_NUM)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "too many items to modify");
- actions[i].action_type = type;
- actions[i].field = field->id;
- actions[i].length = field->size ==
- 4 ? 0 : field->size * 8;
- rte_memcpy(&actions[i].data[4 - field->size],
- &spec[field->offset], field->size);
- actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- ++i;
+
+ /*
+ * The item and mask are provided in big-endian format.
+ * The fields should be presented in big-endian format as well.
+ * The mask must always be present, it defines the actual field width.
+ */
+ assert(item->mask);
+ assert(field->size);
+ do {
+ unsigned int size_b;
+ unsigned int off_b;
+ uint32_t mask;
+ uint32_t data;
+
+ if (i >= MLX5_MODIFY_NUM)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many items to modify");
+ /* Fetch variable byte size mask from the array. */
+ mask = flow_dv_fetch_field((const uint8_t *)item->mask +
+ field->offset, field->size);
+ if (!mask) {
+ ++field;
+ continue;
}
- if (resource->actions_num != i)
- resource->actions_num = i;
- field++;
- }
+ /* Deduce actual data width in bits from mask value. */
+ off_b = rte_bsf32(mask);
+ size_b = sizeof(uint32_t) * CHAR_BIT -
+ off_b - __builtin_clz(mask);
+ assert(size_b);
+ size_b = size_b == sizeof(uint32_t) * CHAR_BIT ? 0 : size_b;
+ actions[i].action_type = type;
+ actions[i].field = field->id;
+ actions[i].offset = off_b;
+ actions[i].length = size_b;
+ /* Convert entire record to expected big-endian format. */
+ actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
+ if (type == MLX5_MODIFICATION_TYPE_COPY) {
+ assert(dcopy);
+ actions[i].dst_field = dcopy->id;
+ actions[i].dst_offset =
+ (int)dcopy->offset < 0 ? off_b : dcopy->offset;
+ /* Convert entire record to big-endian format. */
+ actions[i].data1 = rte_cpu_to_be_32(actions[i].data1);
+ } else {
+ assert(item->spec);
+ data = flow_dv_fetch_field((const uint8_t *)item->spec +
+ field->offset, field->size);
+ /* Shift out the trailing masked bits from data. */
+ data = (data & mask) >> off_b;
+ actions[i].data1 = rte_cpu_to_be_32(data);
+ }
+ ++i;
+ ++field;
+ } while (field->size);
+ resource->actions_num = i;
if (!resource->actions_num)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -334,7 +414,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv4;
item.mask = &ipv4_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv4, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv4, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -380,7 +460,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &ipv6;
item.mask = &ipv6_mask;
- return flow_dv_convert_modify_action(&item, modify_ipv6, resource,
+ return flow_dv_convert_modify_action(&item, modify_ipv6, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -426,7 +506,7 @@ struct field_modify_info modify_tcp[] = {
}
item.spec = &eth;
item.mask = &eth_mask;
- return flow_dv_convert_modify_action(&item, modify_eth, resource,
+ return flow_dv_convert_modify_action(&item, modify_eth, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -540,7 +620,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &tcp_mask;
field = modify_tcp;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -600,7 +680,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_SET, error);
}
@@ -657,7 +737,7 @@ struct field_modify_info modify_tcp[] = {
item.mask = &ipv6_mask;
field = modify_ipv6;
}
- return flow_dv_convert_modify_action(&item, field, resource,
+ return flow_dv_convert_modify_action(&item, field, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -702,7 +782,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
@@ -747,7 +827,7 @@ struct field_modify_info modify_tcp[] = {
item.type = RTE_FLOW_ITEM_TYPE_TCP;
item.spec = &tcp;
item.mask = &tcp_mask;
- return flow_dv_convert_modify_action(&item, modify_tcp, resource,
+ return flow_dv_convert_modify_action(&item, modify_tcp, NULL, resource,
MLX5_MODIFICATION_TYPE_ADD, error);
}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 96b9166..b9e53f5 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -383,11 +383,12 @@ struct mlx5_cqe {
/* CQE format value. */
#define MLX5_COMPRESSED 0x3
-/* Write a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_SET 1
-
-/* Add a specific data value to a field. */
-#define MLX5_MODIFICATION_TYPE_ADD 2
+/* Action type of header modification. */
+enum {
+ MLX5_MODIFICATION_TYPE_SET = 0x1,
+ MLX5_MODIFICATION_TYPE_ADD = 0x2,
+ MLX5_MODIFICATION_TYPE_COPY = 0x3,
+};
/* The field of packet to be modified. */
enum mlx5_modification_field {
@@ -470,6 +471,13 @@ struct mlx5_modification_cmd {
union {
uint32_t data1;
uint8_t data[4];
+ struct {
+ unsigned int rsvd2:8;
+ unsigned int dst_offset:5;
+ unsigned int rsvd3:3;
+ unsigned int dst_field:12;
+ unsigned int rsvd4:4;
+ };
};
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 03/19] net/mlx5: add metadata register copy
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
` (16 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add a flow metadata register copy action which is supported through the
modify header command. As it is an internal action, not exposed to users,
the action type (MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG) is a negative value.
This can be
used when creating PMD internal subflows.
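An editorial sketch (not in the original commit message) of how a PMD
internal subflow could carry the new action, using the structures this
patch introduces; the register choice is an example:

/* Copy reg_c[0] into reg_b when the internal subflow matches. */
struct mlx5_flow_action_copy_mreg cp_mreg = {
	.dst = REG_B,
	.src = REG_C_0,
};
struct rte_flow_action copy_action = {
	.type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
	.conf = &cp_mreg,
};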
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 13 +++++++----
drivers/net/mlx5/mlx5_flow_dv.c | 50 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8cc6c47..170192d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -47,24 +47,30 @@ enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
};
-/* Private rte flow actions. */
+/* Private (internal) rte flow actions. */
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
/* Matches on selected register. */
struct mlx5_rte_flow_item_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
/* Modify selected register. */
struct mlx5_rte_flow_action_set_tag {
- uint16_t id;
+ enum modify_reg id;
uint32_t data;
};
+struct mlx5_flow_action_copy_mreg {
+ enum modify_reg dst;
+ enum modify_reg src;
+};
+
/* Matches on source queue. */
struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
@@ -227,7 +233,6 @@ struct mlx5_rte_flow_item_tx_queue {
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
-
#ifndef IPPROTO_MPLS
#define IPPROTO_MPLS 137
#endif
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6a3850a..baa34a2 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -863,7 +863,7 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
- const struct mlx5_rte_flow_action_set_tag *conf = (action->conf);
+ const struct mlx5_rte_flow_action_set_tag *conf = action->conf;
struct mlx5_modification_cmd *actions = resource->actions;
uint32_t i = resource->actions_num;
@@ -885,6 +885,47 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert internal COPY_REG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] res
+ * Pointer to the modify-header resource.
+ * @param[in] action
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+ struct mlx5_flow_dv_modify_hdr_resource *res,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_action_copy_mreg *conf = action->conf;
+ uint32_t mask = RTE_BE32(UINT32_MAX);
+ struct rte_flow_item item = {
+ .spec = NULL,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_src[] = {
+ {4, 0, reg_to_field[conf->src]},
+ {0, 0, 0},
+ };
+ struct field_modify_info reg_dst = {
+ .offset = (uint32_t)-1, /* Same as src. */
+ .id = reg_to_field[conf->dst],
+ };
+ return flow_dv_convert_modify_action(&item,
+ reg_src, &reg_dst, res,
+ MLX5_MODIFICATION_TYPE_COPY,
+ error);
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -3951,6 +3992,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -5947,6 +5989,12 @@ struct field_modify_info modify_tcp[] = {
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
+ case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
+ if (flow_dv_convert_action_copy_mreg(dev, &res,
+ actions, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 04/19] net/mlx5: refactor flow structure
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (2 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
` (15 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Some rte_flow fields which are local to subflows have been moved to
the mlx5_flow structure. RSS attributes are grouped into the mlx5_flow_rss
structure. tag_resource is moved to the mlx5_flow_dv structure.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 18 +++++---
drivers/net/mlx5/mlx5_flow.h | 25 ++++++-----
drivers/net/mlx5/mlx5_flow_dv.c | 89 ++++++++++++++++++++------------------
drivers/net/mlx5/mlx5_flow_verbs.c | 55 ++++++++++++-----------
4 files changed, 105 insertions(+), 82 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5408797..d1661f2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -612,7 +612,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
unsigned int i;
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -676,7 +676,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
assert(dev->data->dev_started);
for (i = 0; i != flow->rss.queue_num; ++i) {
- int idx = (*flow->queue)[i];
+ int idx = (*flow->rss.queue)[i];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
@@ -2815,13 +2815,20 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
goto error_before_flow;
}
flow->drv_type = flow_get_drv_type(dev, attr);
- flow->ingress = attr->ingress;
- flow->transfer = attr->transfer;
if (hairpin_id != 0)
flow->hairpin_flow_id = hairpin_id;
assert(flow->drv_type > MLX5_FLOW_TYPE_MIN &&
flow->drv_type < MLX5_FLOW_TYPE_MAX);
- flow->queue = (void *)(flow + 1);
+ flow->rss.queue = (void *)(flow + 1);
+ if (rss) {
+ /*
+ * The following information is required by
+ * mlx5_flow_hashfields_adjust() in advance.
+ */
+ flow->rss.level = rss->level;
+ /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
+ flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
+ }
LIST_INIT(&flow->dev_flows);
if (rss && rss->types) {
unsigned int graph_root;
@@ -2861,6 +2868,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (!dev_flow)
goto error;
dev_flow->flow = flow;
+ dev_flow->external = 0;
LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
ret = flow_drv_translate(dev, dev_flow, &attr_tx,
items_tx.items,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 170192d..b9a9507 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -417,7 +417,6 @@ struct mlx5_flow_dv_push_vlan_action_resource {
/* DV flows structure. */
struct mlx5_flow_dv {
- uint64_t hash_fields; /**< Fields that participate in the hash. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queues. */
/* Flow DV api: */
struct mlx5_flow_dv_matcher *matcher; /**< Cache to matcher. */
@@ -436,6 +435,8 @@ struct mlx5_flow_dv {
/**< Structure for VF VLAN workaround. */
struct mlx5_flow_dv_push_vlan_action_resource *push_vlan_res;
/**< Pointer to push VLAN action resource in cache. */
+ struct mlx5_flow_dv_tag_resource *tag_resource;
+ /**< pointer to the tag action. */
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
void *actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS];
/**< Action list. */
@@ -460,11 +461,18 @@ struct mlx5_flow_verbs {
};
struct ibv_flow *flow; /**< Verbs flow pointer. */
struct mlx5_hrxq *hrxq; /**< Hash Rx queue object. */
- uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
struct mlx5_vf_vlan vf_vlan;
/**< Structure for VF VLAN workaround. */
};
+struct mlx5_flow_rss {
+ uint32_t level;
+ uint32_t queue_num; /**< Number of entries in @p queue. */
+ uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+ uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
+ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+};
+
/** Device flow structure. */
struct mlx5_flow {
LIST_ENTRY(mlx5_flow) next;
@@ -473,6 +481,10 @@ struct mlx5_flow {
/**< Bit-fields of present layers, see MLX5_FLOW_LAYER_*. */
uint64_t actions;
/**< Bit-fields of detected actions, see MLX5_FLOW_ACTION_*. */
+ uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
+ uint8_t ingress; /**< 1 if the flow is ingress. */
+ uint32_t group; /**< The group index. */
+ uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
union {
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
struct mlx5_flow_dv dv;
@@ -486,18 +498,11 @@ struct mlx5_flow {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
+ struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
- struct mlx5_flow_dv_tag_resource *tag_resource;
- /**< pointer to the tag action. */
- struct rte_flow_action_rss rss;/**< RSS context. */
- uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
- uint16_t (*queue)[]; /**< Destination queues to redirect traffic to. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
- uint8_t ingress; /**< 1 if the flow is ingress. */
- uint32_t group; /**< The group index. */
- uint8_t transfer; /**< 1 if the flow is E-Switch flow. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index baa34a2..b7e8e0a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1585,10 +1585,9 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
struct mlx5_flow_dv_encap_decap_resource *cache_resource;
- struct rte_flow *flow = dev_flow->flow;
struct mlx5dv_dr_domain *domain;
- resource->flags = flow->group ? 0 : 1;
+ resource->flags = dev_flow->group ? 0 : 1;
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
domain = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX)
@@ -2747,7 +2746,7 @@ struct field_modify_info modify_tcp[] = {
else
ns = sh->rx_domain;
resource->flags =
- dev_flow->flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
+ dev_flow->group ? 0 : MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL;
/* Lookup a matching resource from cache. */
LIST_FOREACH(cache_resource, &sh->modify_cmds, next) {
if (resource->ft_type == cache_resource->ft_type &&
@@ -4068,18 +4067,20 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_action actions[] __rte_unused,
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow);
+ struct mlx5_flow *dev_flow;
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
- return flow;
+ dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
#ifndef NDEBUG
@@ -5460,7 +5461,7 @@ struct field_modify_info modify_tcp[] = {
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
rte_atomic32_inc(&cache_resource->refcnt);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
return 0;
}
}
@@ -5482,7 +5483,7 @@ struct field_modify_info modify_tcp[] = {
rte_atomic32_init(&cache_resource->refcnt);
rte_atomic32_inc(&cache_resource->refcnt);
LIST_INSERT_HEAD(&sh->tags, cache_resource, next);
- dev_flow->flow->tag_resource = cache_resource;
+ dev_flow->dv.tag_resource = cache_resource;
DRV_LOG(DEBUG, "new tag resource %p: refcnt %d++",
(void *)cache_resource,
rte_atomic32_read(&cache_resource->refcnt));
@@ -5662,7 +5663,7 @@ struct field_modify_info modify_tcp[] = {
&table, error);
if (ret)
return ret;
- flow->group = table;
+ dev_flow->group = table;
if (attr->transfer)
res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
@@ -5699,47 +5700,50 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_FLAG:
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
- if (!flow->tag_resource)
+ if (!dev_flow->dv.tag_resource)
if (flow_dv_tag_resource_register
(dev, &tag_resource, dev_flow, error))
return errno;
dev_flow->dv.actions[actions_n++] =
- flow->tag_resource->action;
+ dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
+ assert(flow->rss.queue);
queue = actions->conf;
flow->rss.queue_num = 1;
- (*flow->queue)[0] = queue->index;
+ (*flow->rss.queue)[0] = queue->index;
action_flags |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
+ assert(flow->rss.queue);
rss = actions->conf;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance
+ * when expanding items for RSS.
+ */
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
@@ -5750,7 +5754,7 @@ struct field_modify_info modify_tcp[] = {
flow->counter = flow_dv_counter_alloc(dev,
count->shared,
count->id,
- flow->group);
+ dev_flow->group);
if (flow->counter == NULL)
goto cnt_err;
dev_flow->dv.actions[actions_n++] =
@@ -6048,9 +6052,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv4(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -6075,9 +6080,10 @@ struct field_modify_info modify_tcp[] = {
mlx5_flow_tunnel_ip_check(items, next_protocol,
&item_flags, &tunnel);
flow_dv_translate_item_ipv6(match_mask, match_value,
- items, tunnel, flow->group);
+ items, tunnel,
+ dev_flow->group);
matcher.priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -6102,7 +6108,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_tcp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
IBV_RX_HASH_SRC_PORT_TCP |
@@ -6114,7 +6120,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_udp(match_mask, match_value,
items, tunnel);
matcher.priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
IBV_RX_HASH_SRC_PORT_UDP |
@@ -6210,7 +6216,7 @@ struct field_modify_info modify_tcp[] = {
matcher.priority = mlx5_flow_adjust_priority(dev, priority,
matcher.priority);
matcher.egress = attr->egress;
- matcher.group = flow->group;
+ matcher.group = dev_flow->group;
matcher.transfer = attr->transfer;
if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
return -rte_errno;
@@ -6244,7 +6250,7 @@ struct field_modify_info modify_tcp[] = {
dv = &dev_flow->dv;
n = dv->actions_n;
if (dev_flow->actions & MLX5_FLOW_ACTION_DROP) {
- if (flow->transfer) {
+ if (dev_flow->transfer) {
dv->actions[n++] = priv->sh->esw_drop_action;
} else {
dv->hrxq = mlx5_hrxq_drop_new(dev);
@@ -6262,15 +6268,18 @@ struct field_modify_info modify_tcp[] = {
(MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS)) {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq) {
hrxq = mlx5_hrxq_new
- (dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
- dv->hash_fields, (*flow->queue),
+ (dev, flow->rss.key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
@@ -6580,10 +6589,6 @@ struct field_modify_info modify_tcp[] = {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
}
- if (flow->tag_resource) {
- flow_dv_tag_release(dev, flow->tag_resource);
- flow->tag_resource = NULL;
- }
while (!LIST_EMPTY(&flow->dev_flows)) {
dev_flow = LIST_FIRST(&flow->dev_flows);
LIST_REMOVE(dev_flow, next);
@@ -6599,6 +6604,8 @@ struct field_modify_info modify_tcp[] = {
flow_dv_port_id_action_resource_release(dev_flow);
if (dev_flow->dv.push_vlan_res)
flow_dv_push_vlan_action_resource_release(dev_flow);
+ if (dev_flow->dv.tag_resource)
+ flow_dv_tag_release(dev, dev_flow->dv.tag_resource);
rte_free(dev_flow);
}
}
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fd27f6c..3ab73c2 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -864,8 +864,8 @@
const struct rte_flow_action_queue *queue = action->conf;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- (*flow->queue)[0] = queue->index;
+ if (flow->rss.queue)
+ (*flow->rss.queue)[0] = queue->index;
flow->rss.queue_num = 1;
}
@@ -889,16 +889,17 @@
const uint8_t *rss_key;
struct rte_flow *flow = dev_flow->flow;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
+ if (flow->rss.queue)
+ memcpy((*flow->rss.queue), rss->queue,
rss->queue_num * sizeof(uint16_t));
flow->rss.queue_num = rss->queue_num;
/* NULL RSS key indicates default RSS key. */
rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
+ memcpy(flow->rss.key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /*
+ * rss->level and rss.types should be set in advance when expanding
+ * items for RSS.
+ */
}
/**
@@ -1365,22 +1366,23 @@
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
- struct mlx5_flow *flow;
+ size_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
+ struct mlx5_flow *dev_flow;
size += flow_verbs_get_actions_size(actions);
size += flow_verbs_get_items_size(items);
- flow = rte_calloc(__func__, 1, size, 0);
- if (!flow) {
+ dev_flow = rte_calloc(__func__, 1, size, 0);
+ if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"not enough memory to create flow");
return NULL;
}
- flow->verbs.attr = (void *)(flow + 1);
- flow->verbs.specs =
- (uint8_t *)(flow + 1) + sizeof(struct ibv_flow_attr);
- return flow;
+ dev_flow->verbs.attr = (void *)(dev_flow + 1);
+ dev_flow->verbs.specs = (void *)(dev_flow->verbs.attr + 1);
+ dev_flow->ingress = attr->ingress;
+ dev_flow->transfer = attr->transfer;
+ return dev_flow;
}
/**
@@ -1486,7 +1488,7 @@
flow_verbs_translate_item_ipv4(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV4_LAYER_TYPES,
@@ -1498,7 +1500,7 @@
flow_verbs_translate_item_ipv6(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L3;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel,
MLX5_IPV6_LAYER_TYPES,
@@ -1510,7 +1512,7 @@
flow_verbs_translate_item_tcp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_TCP,
(IBV_RX_HASH_SRC_PORT_TCP |
@@ -1522,7 +1524,7 @@
flow_verbs_translate_item_udp(dev_flow, items,
item_flags);
subpriority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
+ dev_flow->hash_fields |=
mlx5_flow_hashfields_adjust
(dev_flow, tunnel, ETH_RSS_UDP,
(IBV_RX_HASH_SRC_PORT_UDP |
@@ -1667,16 +1669,17 @@
} else {
struct mlx5_hrxq *hrxq;
- hrxq = mlx5_hrxq_get(dev, flow->key,
+ assert(flow->rss.queue);
+ hrxq = mlx5_hrxq_get(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num);
if (!hrxq)
- hrxq = mlx5_hrxq_new(dev, flow->key,
+ hrxq = mlx5_hrxq_new(dev, flow->rss.key,
MLX5_RSS_HASH_KEY_LEN,
- verbs->hash_fields,
- (*flow->queue),
+ dev_flow->hash_fields,
+ (*flow->rss.queue),
flow->rss.queue_num,
!!(dev_flow->layers &
MLX5_FLOW_LAYER_TUNNEL));
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 05/19] net/mlx5: update flow functions
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (3 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
` (14 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Update flow creation/destroy functions for future reuse.
List operations can be skipped inside the functions and done
separately, outside of flow creation.
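An editorial sketch (not in the original commit message) of the PMD
internal usage this enables; dev, attr, items and actions are assumed to
be prepared by the caller, and the trailing 'false' (external) argument
appears later in this series:

/* Create a flow that is not linked into any list; the caller owns it. */
struct rte_flow_error error;
struct rte_flow *internal_flow;

internal_flow = flow_list_create(dev, NULL, &attr, items, actions,
				 false, &error);
if (internal_flow) {
	/* ... used for a PMD internal purpose ... */
	flow_list_destroy(dev, NULL, internal_flow);
}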
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d1661f2..6e6c845 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2736,7 +2736,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter NULL,
+ * no list insertion occurred, flow is just created,
+ * this is caller's responsibility to track the
+ * created flow.
* @param[in] attr
* Flow rule attributes.
* @param[in] items
@@ -2881,7 +2884,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
- TAILQ_INSERT_TAIL(list, flow, next);
+ if (list)
+ TAILQ_INSERT_TAIL(list, flow, next);
flow_rxq_flags_set(dev, flow);
return flow;
error_before_flow:
@@ -2975,7 +2979,8 @@ struct rte_flow *
* @param dev
* Pointer to Ethernet device.
* @param list
- * Pointer to a TAILQ flow list.
+ * Pointer to a TAILQ flow list. If this parameter is NULL,
+ * the flow is not removed from any list.
* @param[in] flow
* Flow to destroy.
*/
@@ -2995,7 +3000,8 @@ struct rte_flow *
mlx5_flow_id_release(priv->sh->flow_id_pool,
flow->hairpin_flow_id);
flow_drv_destroy(dev, flow);
- TAILQ_REMOVE(list, flow, next);
+ if (list)
+ TAILQ_REMOVE(list, flow, next);
rte_free(flow->fdir);
rte_free(flow);
}
--
1.8.3.1
* [dpdk-dev] [PATCH v3 06/19] net/mlx5: update meta register matcher set
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (4 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
` (13 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Introduce a dedicated routine to set up the metadata register
field in the matcher. Update the code to use this unified helper.
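A minimal usage sketch of the new helper; the register, value and mask here
are arbitrary examples, and matcher/key are the fte_match_param buffers
prepared by the translation code:

    /* Sketch only: match a 16-bit value in metadata register reg_c[2]. */
    flow_dv_match_meta_reg(matcher, key, REG_C_2,
                           0x1234 /* value */, 0xffff /* mask */);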
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 171 +++++++++++++++++++---------------------
1 file changed, 82 insertions(+), 89 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b7e8e0a..170726f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4905,6 +4905,78 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add metadata register item to matcher
+ *
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] reg_type
+ * Type of device metadata register
+ * @param[in] value
+ * Register value
+ * @param[in] mask
+ * Register mask
+ */
+static void
+flow_dv_match_meta_reg(void *matcher, void *key,
+ enum modify_reg reg_type,
+ uint32_t data, uint32_t mask)
+{
+ void *misc2_m =
+ MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
+ void *misc2_v =
+ MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
+
+ data &= mask;
+ switch (reg_type) {
+ case REG_A:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data);
+ break;
+ case REG_B:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data);
+ break;
+ case REG_C_0:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, data);
+ break;
+ case REG_C_1:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data);
+ break;
+ case REG_C_2:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data);
+ break;
+ case REG_C_3:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data);
+ break;
+ case REG_C_4:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data);
+ break;
+ case REG_C_5:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data);
+ break;
+ case REG_C_6:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data);
+ break;
+ case REG_C_7:
+ MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask);
+ MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data);
+ break;
+ default:
+ assert(false);
+ break;
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -4922,21 +4994,15 @@ struct field_modify_info modify_tcp[] = {
{
const struct rte_flow_item_meta *meta_m;
const struct rte_flow_item_meta *meta_v;
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
meta_m = (const void *)item->mask;
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v) {
- MLX5_SET(fte_match_set_misc2, misc2_m,
- metadata_reg_a, meta_m->data);
- MLX5_SET(fte_match_set_misc2, misc2_v,
- metadata_reg_a, meta_v->data & meta_m->data);
- }
+ if (meta_v)
+ flow_dv_match_meta_reg(matcher, key, REG_A,
+ rte_cpu_to_be_32(meta_v->data),
+ rte_cpu_to_be_32(meta_m->data));
}
/**
@@ -4953,13 +5019,7 @@ struct field_modify_info modify_tcp[] = {
flow_dv_translate_item_meta_vport(void *matcher, void *key,
uint32_t value, uint32_t mask)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
-
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, mask);
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, value);
+ flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask);
}
/**
@@ -4973,81 +5033,14 @@ struct field_modify_info modify_tcp[] = {
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_tag(void *matcher, void *key,
- const struct rte_flow_item *item)
+flow_dv_translate_mlx5_item_tag(void *matcher, void *key,
+ const struct rte_flow_item *item)
{
- void *misc2_m =
- MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2);
- void *misc2_v =
- MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2);
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
enum modify_reg reg = tag_v->id;
- rte_be32_t value = tag_v->data;
- rte_be32_t mask = tag_m->data;
- switch (reg) {
- case REG_A:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a,
- rte_be_to_cpu_32(value));
- break;
- case REG_B:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_0:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_1:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_2:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_3:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_4:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_5:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_6:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6,
- rte_be_to_cpu_32(value));
- break;
- case REG_C_7:
- MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7,
- rte_be_to_cpu_32(mask));
- MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7,
- rte_be_to_cpu_32(value));
- break;
- }
+ flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
/**
@@ -6179,8 +6172,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
- flow_dv_translate_item_tag(match_mask, match_value,
- items);
+ flow_dv_translate_mlx5_item_tag(match_mask,
+ match_value, items);
last_item = MLX5_FLOW_ITEM_TAG;
break;
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
--
1.8.3.1
* [dpdk-dev] [PATCH v3 07/19] net/mlx5: rename structure and function
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (5 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
` (12 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The following renames are applied:
- in the DV flow engine overall: flow_d_* -> flow_dv_*
- in flow_dv_translate(): res -> mhdr_res
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 151 ++++++++++++++++++++--------------------
1 file changed, 76 insertions(+), 75 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 170726f..9b2eba5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -183,7 +183,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to the rte_eth_dev structure.
*/
static void
-flow_d_shared_lock(struct rte_eth_dev *dev)
+flow_dv_shared_lock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -198,7 +198,7 @@ struct field_modify_info modify_tcp[] = {
}
static void
-flow_d_shared_unlock(struct rte_eth_dev *dev)
+flow_dv_shared_unlock(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
@@ -5599,7 +5599,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Fill the flow with DV spec.
+ * Fill the flow with DV spec, lock free
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to rte_eth_dev structure.
@@ -5618,12 +5619,12 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+__flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow *flow = dev_flow->flow;
@@ -5638,7 +5639,7 @@ struct field_modify_info modify_tcp[] = {
};
int actions_n = 0;
bool actions_end = false;
- struct mlx5_flow_dv_modify_hdr_resource res = {
+ struct mlx5_flow_dv_modify_hdr_resource mhdr_res = {
.ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX
};
@@ -5658,7 +5659,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
dev_flow->group = table;
if (attr->transfer)
- res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
+ mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; !actions_end ; actions++) {
@@ -5807,7 +5808,7 @@ struct field_modify_info modify_tcp[] = {
mlx5_update_vlan_vid_pcp(actions, &vlan);
/* If no VLAN push - this is a modify header action */
if (flow_dv_convert_action_modify_vlan_vid
- (&res, actions, error))
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
break;
@@ -5906,8 +5907,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
- if (flow_dv_convert_action_modify_mac(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_mac
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ?
@@ -5916,8 +5917,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
- if (flow_dv_convert_action_modify_ipv4(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv4
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ?
@@ -5926,8 +5927,8 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
- if (flow_dv_convert_action_modify_ipv6(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_ipv6
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ?
@@ -5936,9 +5937,9 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
- if (flow_dv_convert_action_modify_tp(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_tp
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_SET_TP_SRC ?
@@ -5946,23 +5947,22 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_SET_TP_DST;
break;
case RTE_FLOW_ACTION_TYPE_DEC_TTL:
- if (flow_dv_convert_action_modify_dec_ttl(&res, items,
- &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_dec_ttl
+ (&mhdr_res, items, &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_DEC_TTL;
break;
case RTE_FLOW_ACTION_TYPE_SET_TTL:
- if (flow_dv_convert_action_modify_ttl(&res, actions,
- items, &flow_attr,
- error))
+ if (flow_dv_convert_action_modify_ttl
+ (&mhdr_res, actions, items,
+ &flow_attr, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TTL;
break;
case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ:
- if (flow_dv_convert_action_modify_tcp_seq(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_seq
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ?
@@ -5972,8 +5972,8 @@ struct field_modify_info modify_tcp[] = {
case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK:
case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK:
- if (flow_dv_convert_action_modify_tcp_ack(&res, actions,
- error))
+ if (flow_dv_convert_action_modify_tcp_ack
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= actions->type ==
RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ?
@@ -5981,14 +5981,14 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
- if (flow_dv_convert_action_set_reg(&res, actions,
- error))
+ if (flow_dv_convert_action_set_reg
+ (&mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
- if (flow_dv_convert_action_copy_mreg(dev, &res,
- actions, error))
+ if (flow_dv_convert_action_copy_mreg
+ (dev, &mhdr_res, actions, error))
return -rte_errno;
action_flags |= MLX5_FLOW_ACTION_SET_TAG;
break;
@@ -5997,9 +5997,7 @@ struct field_modify_info modify_tcp[] = {
if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
- (dev, &res,
- dev_flow,
- error))
+ (dev, &mhdr_res, dev_flow, error))
return -rte_errno;
dev_flow->dv.actions[modify_action_position] =
dev_flow->dv.modify_hdr->verbs_action;
@@ -6217,7 +6215,8 @@ struct field_modify_info modify_tcp[] = {
}
/**
- * Apply the flow to the NIC.
+ * Apply the flow to the NIC, lock free,
+ * (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6230,8 +6229,8 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
- struct rte_flow_error *error)
+__flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct rte_flow_error *error)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6529,6 +6528,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC but keeps it in memory.
+ * Lock free, (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to Ethernet device.
@@ -6536,7 +6536,7 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow_dv *dv;
struct mlx5_flow *dev_flow;
@@ -6564,6 +6564,7 @@ struct field_modify_info modify_tcp[] = {
/**
* Remove the flow from the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
*
* @param[in] dev
* Pointer to the Ethernet device structure.
@@ -6571,13 +6572,13 @@ struct field_modify_info modify_tcp[] = {
* Pointer to flow structure.
*/
static void
-flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
struct mlx5_flow *dev_flow;
if (!flow)
return;
- flow_dv_remove(dev, flow);
+ __flow_dv_remove(dev, flow);
if (flow->counter) {
flow_dv_counter_release(dev, flow->counter);
flow->counter = NULL;
@@ -6688,69 +6689,69 @@ struct field_modify_info modify_tcp[] = {
}
/*
- * Mutex-protected thunk to flow_dv_translate().
+ * Mutex-protected thunk to lock-free __flow_dv_translate().
*/
static int
-flow_d_translate(struct rte_eth_dev *dev,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
+flow_dv_translate(struct rte_eth_dev *dev,
+ struct mlx5_flow *dev_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_translate(dev, dev_flow, attr, items, actions, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_translate(dev, dev_flow, attr, items, actions, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_apply().
+ * Mutex-protected thunk to lock-free __flow_dv_apply().
*/
static int
-flow_d_apply(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
+flow_dv_apply(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
{
int ret;
- flow_d_shared_lock(dev);
- ret = flow_dv_apply(dev, flow, error);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_apply(dev, flow, error);
+ flow_dv_shared_unlock(dev);
return ret;
}
/*
- * Mutex-protected thunk to flow_dv_remove().
+ * Mutex-protected thunk to lock-free __flow_dv_remove().
*/
static void
-flow_d_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_remove(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_remove(dev, flow);
+ flow_dv_shared_unlock(dev);
}
/*
- * Mutex-protected thunk to flow_dv_destroy().
+ * Mutex-protected thunk to lock-free __flow_dv_destroy().
*/
static void
-flow_d_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
+flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
- flow_d_shared_lock(dev);
- flow_dv_destroy(dev, flow);
- flow_d_shared_unlock(dev);
+ flow_dv_shared_lock(dev);
+ __flow_dv_destroy(dev, flow);
+ flow_dv_shared_unlock(dev);
}
const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.validate = flow_dv_validate,
.prepare = flow_dv_prepare,
- .translate = flow_d_translate,
- .apply = flow_d_apply,
- .remove = flow_d_remove,
- .destroy = flow_d_destroy,
+ .translate = flow_dv_translate,
+ .apply = flow_dv_apply,
+ .remove = flow_dv_remove,
+ .destroy = flow_dv_destroy,
.query = flow_dv_query,
};
--
1.8.3.1
* [dpdk-dev] [PATCH v3 08/19] net/mlx5: check metadata registers availability
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (6 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
` (11 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The metadata registers reg_c provide support for the TAG and
SET_TAG features. Although there are 8 such registers available
on the current mlx5 devices, some of them can be reserved.
The availability should be queried by the iterative trial-and-error
implemented by the mlx5_flow_discover_mreg_c() routine.
If reg_c is available, it can be regarded inclusively that
extensive metadata support is possible, e.g. the metadata
register copy action, support for 16 modify header actions
(instead of 8 by default), preserving registers across
different domains (FDB and NIC), and so on.
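A minimal sketch of how the probe path uses the new helpers; this simply
mirrors the calls added to mlx5.c in the diff below:

    /* Sketch only: discover usable reg_c's and check extensive support. */
    err = mlx5_flow_discover_mreg_c(eth_dev);
    if (err < 0) {
            err = -err;
            goto error;
    }
    if (!mlx5_flow_ext_mreg_supported(eth_dev))
            DRV_LOG(DEBUG,
                    "port %u extensive metadata register is not supported",
                    eth_dev->data->port_id);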
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 11 +++++
drivers/net/mlx5/mlx5.h | 11 ++++-
drivers/net/mlx5/mlx5_ethdev.c | 8 +++-
drivers/net/mlx5/mlx5_flow.c | 98 +++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow.h | 13 ------
drivers/net/mlx5/mlx5_flow_dv.c | 9 ++--
drivers/net/mlx5/mlx5_prm.h | 18 ++++++++
7 files changed, 148 insertions(+), 20 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 72c30bf..1b86b7b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2341,6 +2341,17 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
+ if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
+ DRV_LOG(DEBUG,
+ "port %u extensive metadata register is not supported",
+ eth_dev->data->port_id);
+ }
return eth_dev;
error:
if (priv) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..6b82c6d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -37,6 +37,7 @@
#include "mlx5_autoconf.h"
#include "mlx5_defs.h"
#include "mlx5_glue.h"
+#include "mlx5_prm.h"
enum {
PCI_VENDOR_ID_MELLANOX = 0x15b3,
@@ -252,6 +253,8 @@ struct mlx5_dev_config {
} mprq; /* Configurations for Multi-Packet RQ. */
int mps; /* Multi-packet send supported mode. */
unsigned int flow_prio; /* Number of flow priorities. */
+ enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+ /* Availability of mreg_c's. */
unsigned int tso_max_payload_sz; /* Maximum TCP payload for TSO. */
unsigned int ind_table_max_size; /* Maximum indirection table size. */
unsigned int max_dump_files_num; /* Maximum dump files per queue. */
@@ -561,6 +564,10 @@ struct mlx5_flow_tbl_resource {
#define MLX5_MAX_TABLES UINT16_MAX
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
+/* Reserve the last two tables for metadata register copy. */
+#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
+ (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -786,7 +793,7 @@ int mlx5_dev_to_pci_addr(const char *dev_path,
int mlx5_is_removed(struct rte_eth_dev *dev);
eth_tx_burst_t mlx5_select_tx_function(struct rte_eth_dev *dev);
eth_rx_burst_t mlx5_select_rx_function(struct rte_eth_dev *dev);
-struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port);
+struct mlx5_priv *mlx5_port_to_eswitch_info(uint16_t port, bool valid);
struct mlx5_priv *mlx5_dev_to_eswitch_info(struct rte_eth_dev *dev);
int mlx5_sysfs_switch_info(unsigned int ifindex,
struct mlx5_switch_info *info);
@@ -866,6 +873,8 @@ int mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
/* mlx5_flow.c */
+int mlx5_flow_discover_mreg_c(struct rte_eth_dev *eth_dev);
+bool mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev);
int mlx5_flow_discover_priorities(struct rte_eth_dev *dev);
void mlx5_flow_print(struct rte_flow *flow);
int mlx5_flow_validate(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..2b7c867 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1793,6 +1793,10 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
*
* @param[in] port
* Device port id.
+ * @param[in] valid
+ * Device port id is valid, skip check. This flag is useful
+ * when trials are performed from probing and device is not
+ * flagged as valid yet (in attaching process).
* @param[out] es_domain_id
* E-Switch domain id.
* @param[out] es_port_id
@@ -1803,7 +1807,7 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
* on success, NULL otherwise and rte_errno is set.
*/
struct mlx5_priv *
-mlx5_port_to_eswitch_info(uint16_t port)
+mlx5_port_to_eswitch_info(uint16_t port, bool valid)
{
struct rte_eth_dev *dev;
struct mlx5_priv *priv;
@@ -1812,7 +1816,7 @@ struct mlx5_priv *
rte_errno = EINVAL;
return NULL;
}
- if (!rte_eth_dev_is_valid_port(port)) {
+ if (!valid && !rte_eth_dev_is_valid_port(port)) {
rte_errno = ENODEV;
return NULL;
}
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6e6c845..f32ea8d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -368,6 +368,33 @@ static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
NULL, "invalid feature name");
}
+
+/**
+ * Check extensive flow metadata register support.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * True if device supports extensive flow metadata register, otherwise false.
+ */
+bool
+mlx5_flow_ext_mreg_supported(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
+ /*
+ * Having available reg_c can be regarded inclusively as supporting
+ * extensive flow metadata register, which could mean,
+ * - metadata register copy action by modify header.
+ * - 16 modify header actions are supported.
+ * - reg_c's are preserved across different domain (FDB and NIC) on
+ * packet loopback by flow lookup miss.
+ */
+ return config->flow_mreg_c[2] != REG_NONE;
+}
+
/**
* Discover the maximum number of priority available.
*
@@ -4033,3 +4060,74 @@ struct rte_flow *
}
return 0;
}
+
+/**
+ * Discover availability of metadata reg_c's.
+ *
+ * Iteratively use test flows to check availability.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ enum modify_reg idx;
+ int n = 0;
+
+ /* reg_c[0] and reg_c[1] are reserved. */
+ config->flow_mreg_c[n++] = REG_C_0;
+ config->flow_mreg_c[n++] = REG_C_1;
+ /* Discover availability of other reg_c's. */
+ for (idx = REG_C_2; idx <= REG_C_7; ++idx) {
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .priority = MLX5_FLOW_PRIO_RSVD,
+ .ingress = 1,
+ };
+ struct rte_flow_item items[] = {
+ [0] = {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action actions[] = {
+ [0] = {
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &(struct mlx5_flow_action_copy_mreg){
+ .src = REG_C_1,
+ .dst = idx,
+ },
+ },
+ [1] = {
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &(struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ },
+ },
+ [2] = {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ struct rte_flow *flow;
+ struct rte_flow_error error;
+
+ if (!config->dv_flow_en)
+ break;
+ /* Create internal flow, validation skips copy action. */
+ flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, &error);
+ if (!flow)
+ continue;
+ if (dev->data->dev_started || !flow_drv_apply(dev, flow, NULL))
+ config->flow_mreg_c[n++] = idx;
+ flow_list_destroy(dev, NULL, flow);
+ }
+ for (; n < MLX5_MREG_C_NUM; ++n)
+ config->flow_mreg_c[n] = REG_NONE;
+ return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b9a9507..f2b6726 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -27,19 +27,6 @@
#include "mlx5.h"
#include "mlx5_prm.h"
-enum modify_reg {
- REG_A,
- REG_B,
- REG_C_0,
- REG_C_1,
- REG_C_2,
- REG_C_3,
- REG_C_4,
- REG_C_5,
- REG_C_6,
- REG_C_7,
-};
-
/* Private rte flow items. */
enum mlx5_rte_flow_item_type {
MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9b2eba5..da3589f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -832,6 +832,7 @@ struct field_modify_info modify_tcp[] = {
}
static enum mlx5_modification_field reg_to_field[] = {
+ [REG_NONE] = MLX5_MODI_OUT_NONE,
[REG_A] = MLX5_MODI_META_DATA_REG_A,
[REG_B] = MLX5_MODI_META_DATA_REG_B,
[REG_C_0] = MLX5_MODI_META_REG_C_0,
@@ -1040,7 +1041,7 @@ struct field_modify_info modify_tcp[] = {
return ret;
if (!spec)
return 0;
- esw_priv = mlx5_port_to_eswitch_info(spec->id);
+ esw_priv = mlx5_port_to_eswitch_info(spec->id, false);
if (!esw_priv)
return rte_flow_error_set(error, rte_errno,
RTE_FLOW_ERROR_TYPE_ITEM_SPEC, spec,
@@ -2697,7 +2698,7 @@ struct field_modify_info modify_tcp[] = {
"failed to obtain E-Switch info");
port_id = action->conf;
port = port_id->original ? dev->data->port_id : port_id->id;
- act_priv = mlx5_port_to_eswitch_info(port);
+ act_priv = mlx5_port_to_eswitch_info(port, false);
if (!act_priv)
return rte_flow_error_set
(error, rte_errno,
@@ -5092,7 +5093,7 @@ struct field_modify_info modify_tcp[] = {
mask = pid_m ? pid_m->id : 0xffff;
id = pid_v ? pid_v->id : dev->data->port_id;
- priv = mlx5_port_to_eswitch_info(id);
+ priv = mlx5_port_to_eswitch_info(id, item == NULL);
if (!priv)
return -rte_errno;
/* Translate to vport field or to metadata, depending on mode. */
@@ -5540,7 +5541,7 @@ struct field_modify_info modify_tcp[] = {
(const struct rte_flow_action_port_id *)action->conf;
port = conf->original ? dev->data->port_id : conf->id;
- priv = mlx5_port_to_eswitch_info(port);
+ priv = mlx5_port_to_eswitch_info(port, false);
if (!priv)
return rte_flow_error_set(error, -rte_errno,
RTE_FLOW_ERROR_TYPE_ACTION,
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b9e53f5..c17ba66 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -392,6 +392,7 @@ enum {
/* The field of packet to be modified. */
enum mlx5_modification_field {
+ MLX5_MODI_OUT_NONE = -1,
MLX5_MODI_OUT_SMAC_47_16 = 1,
MLX5_MODI_OUT_SMAC_15_0,
MLX5_MODI_OUT_ETHERTYPE,
@@ -455,6 +456,23 @@ enum mlx5_modification_field {
MLX5_MODI_IN_TCP_ACK_NUM = 0x5C,
};
+/* Total number of metadata reg_c's. */
+#define MLX5_MREG_C_NUM (MLX5_MODI_META_REG_C_7 - MLX5_MODI_META_REG_C_0 + 1)
+
+enum modify_reg {
+ REG_NONE = 0,
+ REG_A,
+ REG_B,
+ REG_C_0,
+ REG_C_1,
+ REG_C_2,
+ REG_C_3,
+ REG_C_4,
+ REG_C_5,
+ REG_C_6,
+ REG_C_7,
+};
+
/* Modification sub command. */
struct mlx5_modification_cmd {
union {
--
1.8.3.1
* [dpdk-dev] [PATCH v3 09/19] net/mlx5: add devarg for extensive metadata support
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (7 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
` (10 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The PMD parameter dv_xmeta_en is added to control extensive
metadata support. A nonzero value enables extensive flow
metadata support if the device is capable and the driver supports it.
This can enable extensive support of the MARK and META items of
rte_flow. The newly introduced SET_TAG and SET_META actions
do not depend on the dv_xmeta_en parameter, because there is
no compatibility issue for new entities. dv_xmeta_en is
disabled by default.
There are several possible configurations, depending on the parameter
value:
- 0, the default value, defines the legacy mode: the MARK
and META related actions and items operate only within the NIC Tx
and NIC Rx steering domains, no MARK and META information
crosses the domain boundaries. The MARK item is 24 bits wide,
the META item is 32 bits wide.
- 1, this engages extensive metadata mode: the MARK and META
related actions and items operate within all supported steering
domains, including FDB, and MARK and META information may cross
the domain boundaries. The MARK item is 24 bits wide, the
META item width depends on kernel and firmware configurations
and might be 0, 16 or 32 bits. Within the NIC Tx domain META data
width is 32 bits for compatibility, the actual width of data
transferred to the FDB domain depends on kernel configuration
and may vary. The actual supported width can be retrieved
at runtime by a series of rte_flow_validate() trials.
- 2, this engages extensive metadata mode: the MARK and META
related actions and items operate within all supported steering
domains, including FDB, and MARK and META information may cross
the domain boundaries. The META item is 32 bits wide, the MARK
item width depends on kernel and firmware configurations and
might be 0, 16 or 24 bits. The actual supported width can be
retrieved at runtime by a series of rte_flow_validate() trials.
If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
ignored and the device is configured to operate in legacy mode (0).
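As an illustrative sketch only, the devarg can be passed through the EAL
arguments; the PCI address and the application name below are placeholders,
and the whitelist option follows the usual mlx5 devargs syntax:

    /* Sketch only: enable extensive metadata mode 1 on a given port. */
    char *eal_args[] = {
            "app",
            "-w", "0000:03:00.0,dv_xmeta_en=1",
    };
    int ret = rte_eal_init(RTE_DIM(eal_args), eal_args);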
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/mlx5.rst | 49 ++++++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.c | 33 +++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_defs.h | 4 ++++
drivers/net/mlx5/mlx5_prm.h | 3 +++
5 files changed, 90 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..0ccc1c8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -578,6 +578,55 @@ Run-time configuration
Disabled by default.
+- ``dv_xmeta_en`` parameter [int]
+
+ A nonzero value enables extensive flow metadata support if device is
+ capable and driver supports it. This can enable extensive support of
+ ``MARK`` and ``META`` item of ``rte_flow``. The newly introduced
+ ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.
+
+ There are some possible configurations, depending on parameter value:
+
+ - 0, this is default value, defines the legacy mode, the ``MARK`` and
+ ``META`` related actions and items operate only within NIC Tx and
+ NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
+ the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
+ item is 32 bits wide and match is supported on egress only.
+
+ - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 32 bits. Within NIC Tx domain ``META`` data width is 32 bits for
+ compatibility, the actual width of data transferred to the FDB domain
+ depends on kernel configuration and may vary. The actual supported
+ width can be retrieved at runtime by a series of rte_flow_validate()
+ trials.
+
+ - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
+ related actions and items operate within all supported steering domains,
+ including FDB, ``MARK`` and ``META`` information may cross the domain
+ boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
+ depends on kernel and firmware configurations and might be 0, 16 or
+ 24 bits. The actual supported width can be retrieved at runtime by
+ a series of rte_flow_validate() trials.
+
+ +------+-----------+-----------+-------------+-------------+
+ | Mode | ``MARK`` | ``META`` | ``META`` Tx | FDB/Through |
+ +======+===========+===========+=============+=============+
+ | 0 | 24 bits | 32 bits | 32 bits | no |
+ +------+-----------+-----------+-------------+-------------+
+ | 1 | 24 bits | vary 0-32 | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+ | 2 | vary 0-32 | 32 bits | 32 bits | yes |
+ +------+-----------+-----------+-------------+-------------+
+
+ If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
+ ignored and the device is configured to operate in legacy mode (0).
+
+ Disabled by default (set to 0).
+
- ``dv_flow_en`` parameter [int]
A nonzero value enables the DV flow steering assuming it is supported
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1b86b7b..943d0e8 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -125,6 +125,9 @@
/* Activate DV flow steering. */
#define MLX5_DV_FLOW_EN "dv_flow_en"
+/* Enable extensive flow metadata support. */
+#define MLX5_DV_XMETA_EN "dv_xmeta_en"
+
/* Activate Netlink support in VF mode. */
#define MLX5_VF_NL_EN "vf_nl_en"
@@ -1310,6 +1313,16 @@ struct mlx5_flow_id_pool *
config->dv_esw_en = !!tmp;
} else if (strcmp(MLX5_DV_FLOW_EN, key) == 0) {
config->dv_flow_en = !!tmp;
+ } else if (strcmp(MLX5_DV_XMETA_EN, key) == 0) {
+ if (tmp != MLX5_XMETA_MODE_LEGACY &&
+ tmp != MLX5_XMETA_MODE_META16 &&
+ tmp != MLX5_XMETA_MODE_META32) {
+ DRV_LOG(WARNING, "invalid extensive "
+ "metadata parameter");
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ config->dv_xmeta_en = tmp;
} else if (strcmp(MLX5_MR_EXT_MEMSEG_EN, key) == 0) {
config->mr_ext_memseg_en = !!tmp;
} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
@@ -1361,6 +1374,7 @@ struct mlx5_flow_id_pool *
MLX5_VF_NL_EN,
MLX5_DV_ESW_EN,
MLX5_DV_FLOW_EN,
+ MLX5_DV_XMETA_EN,
MLX5_MR_EXT_MEMSEG_EN,
MLX5_REPRESENTOR,
MLX5_MAX_DUMP_FILES_NUM,
@@ -1734,6 +1748,12 @@ struct mlx5_flow_id_pool *
rte_errno = EINVAL;
return rte_errno;
}
+ if (sh_conf->dv_xmeta_en ^ config->dv_xmeta_en) {
+ DRV_LOG(ERR, "\"dv_xmeta_en\" configuration mismatch"
+ " for shared %s context", sh->ibdev_name);
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
return 0;
}
/**
@@ -2347,10 +2367,23 @@ struct mlx5_flow_id_pool *
err = -err;
goto error;
}
+ if (!priv->config.dv_esw_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(WARNING, "metadata mode %u is not supported "
+ "(no E-Switch)", priv->config.dv_xmeta_en);
+ priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
eth_dev->data->port_id);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata registers available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
}
return eth_dev;
error:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6b82c6d..e59f8f6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -238,6 +238,7 @@ struct mlx5_dev_config {
unsigned int vf_nl_en:1; /* Enable Netlink requests in VF mode. */
unsigned int dv_esw_en:1; /* Enable E-Switch DV flow. */
unsigned int dv_flow_en:1; /* Enable DV flow. */
+ unsigned int dv_xmeta_en:2; /* Enable extensive flow metadata. */
unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
unsigned int devx:1; /* Whether devx interface is available or not. */
unsigned int dest_tir:1; /* Whether advanced DR API is available. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index e36ab55..a77c430 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -141,6 +141,10 @@
/* Cache size of mempool for Multi-Packet RQ. */
#define MLX5_MPRQ_MP_CACHE_SZ 32U
+#define MLX5_XMETA_MODE_LEGACY 0
+#define MLX5_XMETA_MODE_META16 1
+#define MLX5_XMETA_MODE_META32 2
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index c17ba66..b405cb6 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -226,6 +226,9 @@
/* Default mark value used when none is provided. */
#define MLX5_FLOW_MARK_DEFAULT 0xffffff
+/* Default mark mask for metadata legacy mode. */
+#define MLX5_FLOW_MARK_MASK 0xffffff
+
/* Maximum number of DS in WQE. Limited by 6-bit field. */
#define MLX5_DSEG_MAX 63
--
1.8.3.1
* [dpdk-dev] [PATCH v3 10/19] net/mlx5: adjust shared register according to mask
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (8 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
` (9 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The metadata register reg_c[0] might be used by the kernel or
firmware for internal purposes. The actually used mask
can be queried from the kernel. The remaining bits can be
used by the PMD to provide the META or MARK features. The code queries
the mask of reg_c[0] and adjusts the resource usage dynamically.
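To illustrate the arithmetic used below, a small standalone sketch with a
hypothetical reg_c[0] split; the kernel reserving the lower 16 bits is an
assumption made only for this example, and rte_bsf32() is modeled with
__builtin_ctz():

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t vport_meta_mask = 0x0000ffff; /* assumed kernel-reserved bits */
            uint32_t reg_c0 = ~vport_meta_mask;    /* 0xffff0000: bits left to the PMD */
            /* META16 mode: shift the usable bits down to get the value mask. */
            uint32_t meta = reg_c0 >> __builtin_ctz(reg_c0); /* 0x0000ffff */

            printf("reg_c0 mask %08x, META16 value mask %08x\n", reg_c0, meta);
            return 0;
    }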
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 95 +++++++++++++++++++++++++++++++++++------
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_flow_dv.c | 41 ++++++++++++++++--
3 files changed, 122 insertions(+), 17 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 943d0e8..fb7b94b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1584,6 +1584,60 @@ struct mlx5_flow_id_pool *
}
/**
+ * Configures the metadata mask fields in the shared context.
+ *
+ * @param [in] dev
+ * Pointer to Ethernet device.
+ */
+static void
+mlx5_set_metadata_mask(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ibv_shared *sh = priv->sh;
+ uint32_t meta, mark, reg_c0;
+
+ reg_c0 = ~priv->vport_meta_mask;
+ switch (priv->config.dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ meta = UINT32_MAX;
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META16:
+ meta = reg_c0 >> rte_bsf32(reg_c0);
+ mark = MLX5_FLOW_MARK_MASK;
+ break;
+ case MLX5_XMETA_MODE_META32:
+ meta = UINT32_MAX;
+ mark = (reg_c0 >> rte_bsf32(reg_c0)) & MLX5_FLOW_MARK_MASK;
+ break;
+ default:
+ meta = 0;
+ mark = 0;
+ assert(false);
+ break;
+ }
+ if (sh->dv_mark_mask && sh->dv_mark_mask != mark)
+ DRV_LOG(WARNING, "metadata MARK mask mismatche %08X:%08X",
+ sh->dv_mark_mask, mark);
+ else
+ sh->dv_mark_mask = mark;
+ if (sh->dv_meta_mask && sh->dv_meta_mask != meta)
+ DRV_LOG(WARNING, "metadata META mask mismatche %08X:%08X",
+ sh->dv_meta_mask, meta);
+ else
+ sh->dv_meta_mask = meta;
+ if (sh->dv_regc0_mask && sh->dv_regc0_mask != reg_c0)
+ DRV_LOG(WARNING, "metadata reg_c0 mask mismatche %08X:%08X",
+ sh->dv_meta_mask, reg_c0);
+ else
+ sh->dv_regc0_mask = reg_c0;
+ DRV_LOG(DEBUG, "metadata mode %u", priv->config.dv_xmeta_en);
+ DRV_LOG(DEBUG, "metadata MARK mask %08X", sh->dv_mark_mask);
+ DRV_LOG(DEBUG, "metadata META mask %08X", sh->dv_meta_mask);
+ DRV_LOG(DEBUG, "metadata reg_c0 mask %08X", sh->dv_regc0_mask);
+}
+
+/**
* Allocate page of door-bells and register it using DevX API.
*
* @param [in] dev
@@ -1803,7 +1857,7 @@ struct mlx5_flow_id_pool *
uint16_t port_id;
unsigned int i;
#ifdef HAVE_MLX5DV_DR_DEVX_PORT
- struct mlx5dv_devx_port devx_port;
+ struct mlx5dv_devx_port devx_port = { .comp_mask = 0 };
#endif
/* Determine if this port representor is supposed to be spawned. */
@@ -2035,13 +2089,17 @@ struct mlx5_flow_id_pool *
* vport index. The engaged part of metadata register is
* defined by mask.
*/
- devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
- MLX5DV_DEVX_PORT_MATCH_REG_C_0;
- err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port, &devx_port);
- if (err) {
- DRV_LOG(WARNING, "can't query devx port %d on device %s",
- spawn->ibv_port, spawn->ibv_dev->name);
- devx_port.comp_mask = 0;
+ if (switch_info->representor || switch_info->master) {
+ devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
+ MLX5DV_DEVX_PORT_MATCH_REG_C_0;
+ err = mlx5_glue->devx_port_query(sh->ctx, spawn->ibv_port,
+ &devx_port);
+ if (err) {
+ DRV_LOG(WARNING,
+ "can't query devx port %d on device %s",
+ spawn->ibv_port, spawn->ibv_dev->name);
+ devx_port.comp_mask = 0;
+ }
}
if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
priv->vport_meta_tag = devx_port.reg_c_0.value;
@@ -2361,18 +2419,27 @@ struct mlx5_flow_id_pool *
goto error;
}
priv->config.flow_prio = err;
- /* Query availability of metadata reg_c's. */
- err = mlx5_flow_discover_mreg_c(eth_dev);
- if (err < 0) {
- err = -err;
- goto error;
- }
if (!priv->config.dv_esw_en &&
priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
DRV_LOG(WARNING, "metadata mode %u is not supported "
"(no E-Switch)", priv->config.dv_xmeta_en);
priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
}
+ mlx5_set_metadata_mask(eth_dev);
+ if (priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ !priv->sh->dv_regc0_mask) {
+ DRV_LOG(ERR, "metadata mode %u is not supported "
+ "(no metadata reg_c[0] is available)",
+ priv->config.dv_xmeta_en);
+ err = ENOTSUP;
+ goto error;
+ }
+ /* Query availability of metadata reg_c's. */
+ err = mlx5_flow_discover_mreg_c(eth_dev);
+ if (err < 0) {
+ err = -err;
+ goto error;
+ }
if (!mlx5_flow_ext_mreg_supported(eth_dev)) {
DRV_LOG(DEBUG,
"port %u extensive metadata register is not supported",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e59f8f6..92d445a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -622,6 +622,9 @@ struct mlx5_ibv_shared {
} mr;
/* Shared DV/DR flow data section. */
pthread_mutex_t dv_mutex; /* DV context mutex. */
+ uint32_t dv_meta_mask; /* flow META metadata supported mask. */
+ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */
+ uint32_t dv_regc0_mask; /* available bits of metadata reg_c[0]. */
uint32_t dv_refcnt; /* DV/DR data reference counter. */
void *fdb_domain; /* FDB Direct Rules name space handle. */
struct mlx5_flow_tbl_resource fdb_tbl[MLX5_MAX_TABLES_FDB];
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index da3589f..fb56329 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -901,13 +901,13 @@ struct field_modify_info modify_tcp[] = {
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev __rte_unused,
+flow_dv_convert_action_copy_mreg(struct rte_eth_dev *dev,
struct mlx5_flow_dv_modify_hdr_resource *res,
const struct rte_flow_action *action,
struct rte_flow_error *error)
{
const struct mlx5_flow_action_copy_mreg *conf = action->conf;
- uint32_t mask = RTE_BE32(UINT32_MAX);
+ rte_be32_t mask = RTE_BE32(UINT32_MAX);
struct rte_flow_item item = {
.spec = NULL,
.mask = &mask,
@@ -917,9 +917,44 @@ struct field_modify_info modify_tcp[] = {
{0, 0, 0},
};
struct field_modify_info reg_dst = {
- .offset = (uint32_t)-1, /* Same as src. */
+ .offset = 0,
.id = reg_to_field[conf->dst],
};
+ /* Adjust reg_c[0] usage according to reported mask. */
+ if (conf->dst == REG_C_0 || conf->src == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t reg_c0 = priv->sh->dv_regc0_mask;
+
+ assert(reg_c0);
+ assert(priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY);
+ if (conf->dst == REG_C_0) {
+ /* Copy to reg_c[0], within mask only. */
+ reg_dst.offset = rte_bsf32(reg_c0);
+ /*
+ * The mask ignores the endianness because
+ * there is no conversion in the datapath.
+ */
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ /* Copy from destination lower bits to reg_c[0]. */
+ mask = reg_c0 >> reg_dst.offset;
+#else
+ /* Copy from destination upper bits to reg_c[0]. */
+ mask = reg_c0 << (sizeof(reg_c0) * CHAR_BIT -
+ rte_fls_u32(reg_c0));
+#endif
+ } else {
+ mask = rte_cpu_to_be_32(reg_c0);
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ /* Copy from reg_c[0] to destination lower bits. */
+ reg_dst.offset = 0;
+#else
+ /* Copy from reg_c[0] to destination upper bits. */
+ reg_dst.offset = sizeof(reg_c0) * CHAR_BIT -
+ (rte_fls_u32(reg_c0) -
+ rte_bsf32(reg_c0));
+#endif
+ }
+ }
return flow_dv_convert_modify_action(&item,
reg_src, ®_dst, res,
MLX5_MODIFICATION_TYPE_COPY,
--
1.8.3.1
* [dpdk-dev] [PATCH v3 11/19] net/mlx5: check the maximal modify actions number
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (9 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
` (8 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
If the extensive metadata registers are supported,
it can be regarded inclusively that extensive
metadata support is possible, e.g. the metadata register
copy action, support for 16 modify header actions,
preserving registers across different steering domains
(FDB and NIC) and so on.
This patch handles the maximal number of modify header
actions depending on the discovered metadata register
support.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 9 +++++++--
drivers/net/mlx5/mlx5_flow_dv.c | 25 +++++++++++++++++++++++++
2 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f2b6726..c1d0a65 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -348,8 +348,13 @@ struct mlx5_flow_dv_tag_resource {
uint32_t tag; /**< the tag value. */
};
-/* Number of modification commands. */
-#define MLX5_MODIFY_NUM 8
+/*
+ * Number of modification commands.
+ * If extensive metadata registers are supported
+ * the maximal actions amount is 16 and 8 otherwise.
+ */
+#define MLX5_MODIFY_NUM 16
+#define MLX5_MODIFY_NUM_NO_MREG 8
/* Modify resource structure */
struct mlx5_flow_dv_modify_hdr_resource {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index fb56329..80280ab 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2749,6 +2749,27 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get the maximum number of modify header actions.
+ *
+ * @param dev
+ * Pointer to rte_eth_dev structure.
+ *
+ * @return
+ * Max number of modify header actions device can support.
+ */
+static unsigned int
+flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev)
+{
+ /*
+ * There's no way to directly query the max cap. Although it has to be
+ * acquired by iterative trial, it is a safe assumption that more
+ * actions are supported by FW if extensive metadata register is
+ * supported.
+ */
+ return mlx5_flow_ext_mreg_supported(dev) ? MLX5_MODIFY_NUM :
+ MLX5_MODIFY_NUM_NO_MREG;
+}
+/**
* Find existing modify-header resource or create and register a new one.
*
* @param dev[in, out]
@@ -2775,6 +2796,10 @@ struct field_modify_info modify_tcp[] = {
struct mlx5_flow_dv_modify_hdr_resource *cache_resource;
struct mlx5dv_dr_domain *ns;
+ if (resource->actions_num > flow_dv_modify_hdr_action_max(dev))
+ return rte_flow_error_set(error, EOVERFLOW,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "too many modify header items");
if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
ns = sh->fdb_domain;
else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX)
--
1.8.3.1
* [dpdk-dev] [PATCH v3 12/19] net/mlx5: update metadata register id query
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (10 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
` (7 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The NIC might support up to 8 extensive metadata registers.
These registers are supposed to be used by multiple features.
A register id query routine is provided to determine which
register is actually used by the specified feature.
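A minimal usage sketch of the query routine; the feature and index are
arbitrary examples, and on error the helper sets rte_errno via
rte_flow_error_set() and returns a negative value:

    /* Sketch only: find which reg_c backs application TAG index 0. */
    enum modify_reg reg;

    reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, 0, error);
    if ((int)reg < 0)
            return reg;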
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 88 +++++++++++++++++++++++++++++---------------
drivers/net/mlx5/mlx5_flow.h | 17 +++++++++
2 files changed, 75 insertions(+), 30 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f32ea8d..b87657a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -316,12 +316,6 @@ struct mlx5_flow_tunnel_info {
},
};
-enum mlx5_feature_name {
- MLX5_HAIRPIN_RX,
- MLX5_HAIRPIN_TX,
- MLX5_APPLICATION,
-};
-
/**
* Translate tag ID to register.
*
@@ -338,37 +332,70 @@ enum mlx5_feature_name {
* The request register on success, a negative errno
* value otherwise and rte_errno is set.
*/
-__rte_unused
-static enum modify_reg flow_get_reg_id(struct rte_eth_dev *dev,
- enum mlx5_feature_name feature,
- uint32_t id,
- struct rte_flow_error *error)
+enum modify_reg
+mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error)
{
- static enum modify_reg id2reg[] = {
- [0] = REG_A,
- [1] = REG_C_2,
- [2] = REG_C_3,
- [3] = REG_C_4,
- [4] = REG_B,};
-
- dev = (void *)dev;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+
switch (feature) {
case MLX5_HAIRPIN_RX:
return REG_B;
case MLX5_HAIRPIN_TX:
return REG_A;
- case MLX5_APPLICATION:
- if (id > 4)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "invalid tag id");
- return id2reg[id];
+ case MLX5_METADATA_RX:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_B;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_0;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_1;
+ }
+ break;
+ case MLX5_METADATA_TX:
+ return REG_A;
+ case MLX5_METADATA_FDB:
+ return REG_C_0;
+ case MLX5_FLOW_MARK:
+ switch (config->dv_xmeta_en) {
+ case MLX5_XMETA_MODE_LEGACY:
+ return REG_NONE;
+ case MLX5_XMETA_MODE_META16:
+ return REG_C_1;
+ case MLX5_XMETA_MODE_META32:
+ return REG_C_0;
+ }
+ break;
+ case MLX5_COPY_MARK:
+ return REG_C_3;
+ case MLX5_APP_TAG:
+ /*
+ * Suppose engaging reg_c_2 .. reg_c_7 registers.
+ * reg_c_2 is reserved for coloring by meters.
+ * reg_c_3 is reserved for split flows TAG.
+ */
+ if (id > (REG_C_7 - REG_C_4))
+ return rte_flow_error_set
+ (error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "invalid tag id");
+ if (config->flow_mreg_c[id + REG_C_4 - REG_C_0] == REG_NONE)
+ return rte_flow_error_set
+ (error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "unsupported tag id");
+ return config->flow_mreg_c[id + REG_C_4 - REG_C_0];
}
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ assert(false);
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "invalid feature name");
}
-
/**
* Check extensive flow metadata register support.
*
@@ -2667,7 +2694,6 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
struct mlx5_rte_flow_item_tag *tag_item;
struct rte_flow_item *item;
char *addr;
- struct rte_flow_error error;
int encap = 0;
mlx5_flow_id_get(priv->sh->flow_id_pool, flow_id);
@@ -2733,7 +2759,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
rte_memcpy(actions_rx, actions, sizeof(struct rte_flow_action));
actions_rx++;
set_tag = (void *)actions_rx;
- set_tag->id = flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, &error);
+ set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_RX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
set_tag->data = *flow_id;
tag_action->conf = set_tag;
/* Create Tx item list. */
@@ -2743,7 +2770,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
item->type = MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item = (void *)addr;
tag_item->data = *flow_id;
- tag_item->id = flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ tag_item->id = mlx5_flow_get_reg_id(dev, MLX5_HAIRPIN_TX, 0, NULL);
+ assert(set_tag->id > REG_NONE);
item->spec = tag_item;
addr += sizeof(struct mlx5_rte_flow_item_tag);
tag_item = (void *)addr;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c1d0a65..9371e11 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -63,6 +63,18 @@ struct mlx5_rte_flow_item_tx_queue {
uint32_t queue;
};
+/* Feature name to allocate metadata register. */
+enum mlx5_feature_name {
+ MLX5_HAIRPIN_RX,
+ MLX5_HAIRPIN_TX,
+ MLX5_METADATA_RX,
+ MLX5_METADATA_TX,
+ MLX5_METADATA_FDB,
+ MLX5_FLOW_MARK,
+ MLX5_APP_TAG,
+ MLX5_COPY_MARK,
+};
+
/* Pattern outer Layer bits. */
#define MLX5_FLOW_LAYER_OUTER_L2 (1u << 0)
#define MLX5_FLOW_LAYER_OUTER_L3_IPV4 (1u << 1)
@@ -534,6 +546,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_query_t query;
};
+
#define MLX5_CNT_CONTAINER(sh, batch, thread) (&(sh)->cmng.ccont \
[(((sh)->cmng.mhi[batch] >> (thread)) & 0x1) * 2 + (batch)])
#define MLX5_CNT_CONTAINER_UNUSED(sh, batch, thread) (&(sh)->cmng.ccont \
@@ -554,6 +567,10 @@ uint64_t mlx5_flow_hashfields_adjust(struct mlx5_flow *dev_flow, int tunnel,
uint64_t hash_fields);
uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
uint32_t subpriority);
+enum modify_reg mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
+ enum mlx5_feature_name feature,
+ uint32_t id,
+ struct rte_flow_error *error);
const struct rte_flow_action *mlx5_flow_find_action
(const struct rte_flow_action *actions,
enum rte_flow_action_type action);
--
1.8.3.1
* [dpdk-dev] [PATCH v3 13/19] net/mlx5: add flow tag support
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (11 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
` (6 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Add support for the new rte_flow item and action - TAG and SET_TAG. TAG is
a transient value which can be kept during flow matching.
This is supported through the device metadata registers reg_c[]. Although
8 registers are available on the current mlx5 device,
some of them can be reserved for firmware or kernel purposes.
The availability should be queried by the iterative trial-and-error
mlx5_flow_discover_mreg_c() routine.
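As a usage sketch only (values arbitrary, flow attributes and the
rte_flow_create() calls omitted), a tag set in one flow can be matched
by a flow in another group:

        struct rte_flow_action_set_tag set_tag = {
                .data = 0x1234,
                .mask = 0xffffffff,
                .index = 0, /* application tag index 0 */
        };
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action set_actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_SET_TAG, .conf = &set_tag },
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        /* In group 1 the transient tag value can be matched again: */
        struct rte_flow_item_tag tag_spec = { .data = 0x1234, .index = 0 };
        struct rte_flow_item tag_pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_TAG, .spec = &tag_spec },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };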
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 232 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 228 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 80280ab..fec2efe 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -872,10 +872,12 @@ struct field_modify_info modify_tcp[] = {
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"too many items to modify");
+ assert(conf->id != REG_NONE);
+ assert(conf->id < RTE_DIM(reg_to_field));
actions[i].action_type = MLX5_MODIFICATION_TYPE_SET;
actions[i].field = reg_to_field[conf->id];
actions[i].data0 = rte_cpu_to_be_32(actions[i].data0);
- actions[i].data1 = conf->data;
+ actions[i].data1 = rte_cpu_to_be_32(conf->data);
++i;
resource->actions_num = i;
if (!resource->actions_num)
@@ -886,6 +888,52 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert SET_TAG action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_tag
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_action_set_tag *conf,
+ struct rte_flow_error *error)
+{
+ rte_be32_t data = rte_cpu_to_be_32(conf->data);
+ rte_be32_t mask = rte_cpu_to_be_32(conf->mask);
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum mlx5_modification_field reg_type;
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ assert((unsigned int)ret < RTE_DIM(reg_to_field));
+ reg_type = reg_to_field[ret];
+ assert(reg_type > 0);
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_type};
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Convert internal COPY_REG action to DV specification.
*
* @param[in] dev
@@ -1016,6 +1064,65 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate TAG item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tag *spec = item->spec;
+ const struct rte_flow_item_tag *mask = item->mask;
+ const struct rte_flow_item_tag nic_mask = {
+ .data = RTE_BE32(UINT32_MAX),
+ .index = 0xff,
+ };
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extensive metadata register"
+ " isn't supported");
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (!mask)
+ mask = &rte_flow_item_tag_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_tag),
+ error);
+ if (ret < 0)
+ return ret;
+ if (mask->index != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
+ "partial mask for tag index"
+ " is not supported");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, spec->index, error);
+ if (ret < 0)
+ return ret;
+ assert(ret != REG_NONE);
+ return 0;
+}
+
+/**
* Validate vport item.
*
* @param[in] dev
@@ -1376,6 +1483,62 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_TAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the encap action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_tag(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_tag *conf;
+ const uint64_t terminal_action_flags =
+ MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE |
+ MLX5_FLOW_ACTION_RSS;
+ int ret;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extensive metadata register"
+ " isn't supported");
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_tag *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, conf->index, error);
+ if (ret < 0)
+ return ret;
+ if (!attr->transfer && attr->ingress &&
+ (action_flags & terminal_action_flags))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "set_tag has no effect"
+ " with terminal actions");
+ return 0;
+}
+
+/**
* Validate count action.
*
* @param[in] dev
@@ -3765,6 +3928,13 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ ret = flow_dv_validate_item_tag(dev, items,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
break;
@@ -3812,6 +3982,17 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK;
++actions_n;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ ret = flow_dv_validate_action_set_tag(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
ret = mlx5_flow_validate_action_drop(action_flags,
attr, error);
@@ -5099,8 +5280,38 @@ struct field_modify_info modify_tcp[] = {
{
const struct mlx5_rte_flow_item_tag *tag_v = item->spec;
const struct mlx5_rte_flow_item_tag *tag_m = item->mask;
- enum modify_reg reg = tag_v->id;
+ assert(tag_v);
+ flow_dv_match_meta_reg(matcher, key, tag_v->id, tag_v->data,
+ tag_m ? tag_m->data : UINT32_MAX);
+}
+
+/**
+ * Add TAG item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_tag(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ const struct rte_flow_item_tag *tag_v = item->spec;
+ const struct rte_flow_item_tag *tag_m = item->mask;
+ enum modify_reg reg;
+
+ assert(tag_v);
+ tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask;
+ /* Get the metadata register index for the tag. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL);
+ assert(reg > 0);
flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data);
}
@@ -5775,6 +5986,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.tag_resource->action;
action_flags |= MLX5_FLOW_ACTION_MARK;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_TAG:
+ if (flow_dv_convert_action_set_tag
+ (dev, &mhdr_res,
+ (const struct rte_flow_action_set_tag *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_TAG;
+ break;
case RTE_FLOW_ACTION_TYPE_DROP:
action_flags |= MLX5_FLOW_ACTION_DROP;
break;
@@ -6055,7 +6274,7 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ACTION_TYPE_END:
actions_end = true;
- if (action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) {
+ if (mhdr_res.actions_num) {
/* create modify action if needed. */
if (flow_dv_modify_hdr_resource_register
(dev, &mhdr_res, dev_flow, error))
@@ -6067,7 +6286,7 @@ struct field_modify_info modify_tcp[] = {
default:
break;
}
- if ((action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS) &&
+ if (mhdr_res.actions_num &&
modify_action_position == UINT32_MAX)
modify_action_position = actions_n++;
}
@@ -6230,6 +6449,11 @@ struct field_modify_info modify_tcp[] = {
items, tunnel);
last_item = MLX5_FLOW_LAYER_ICMP6;
break;
+ case RTE_FLOW_ITEM_TYPE_TAG:
+ flow_dv_translate_item_tag(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_TAG;
+ break;
case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
flow_dv_translate_mlx5_item_tag(match_mask,
match_value, items);
--
1.8.3.1
* [dpdk-dev] [PATCH v3 14/19] net/mlx5: extend flow mark support
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (12 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
@ 2019-11-07 17:09 ` Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
` (5 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:09 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The flow MARK item is newly supported along with the MARK action.
The MARK action and item are supported on both Rx and Tx. They work
on the metadata reg_c[] registers only if the extensive flow metadata
registers are supported. Without that support, the MARK action behaves
the same as before - it is valid only on Rx and no MARK item is valid.
The FLAG action is modified accordingly. The FLAG action is
supported on both Rx and Tx via reg_c[] if the extensive flow
metadata registers are supported.
However, the new MARK/FLAG item and action are currently
disabled until register copy on loopback is supported by
forthcoming patches.
The actual index of the metadata reg_c[] register engaged to
support the FLAG/MARK actions depends on the dv_xmeta_en devarg value.
For extensive metadata mode 1, reg_c[1] is used and the transitive
MARK data width is 24 bits. For extensive metadata mode 2, reg_c[0]
is used and the transitive MARK data width might be restricted to
0 or 16 bits, depending on kernel usage of reg_c[0].
The actual supported width can be discovered by a series of trials
with rte_flow_validate().
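A sketch of such a probing loop (illustrative only; port_id is assumed
to be a valid started port with queue 0 configured):

        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_error err;
        uint32_t width;

        for (width = 24; width > 0; width--) {
                /* Probe the highest bit of a 'width'-bit wide MARK id. */
                struct rte_flow_action_mark mark = { .id = 1u << (width - 1) };
                struct rte_flow_action actions[] = {
                        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                        { .type = RTE_FLOW_ACTION_TYPE_END },
                };
                if (!rte_flow_validate(port_id, &attr, pattern, actions, &err))
                        break; /* 'width' bits of MARK data are accepted. */
        }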
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 5 +-
drivers/net/mlx5/mlx5_flow_dv.c | 383 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 370 insertions(+), 18 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9371e11..d6209ff 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -102,6 +102,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ITEM_METADATA (1u << 16)
#define MLX5_FLOW_ITEM_PORT_ID (1u << 17)
#define MLX5_FLOW_ITEM_TAG (1u << 18)
+#define MLX5_FLOW_ITEM_MARK (1u << 19)
/* Pattern MISC bits. */
#define MLX5_FLOW_LAYER_ICMP (1u << 19)
@@ -194,6 +195,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_INC_TCP_ACK (1u << 30)
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
+#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -228,7 +230,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_INC_TCP_ACK | \
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
- MLX5_FLOW_ACTION_SET_TAG)
+ MLX5_FLOW_ACTION_SET_TAG | \
+ MLX5_FLOW_ACTION_MARK_EXT)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index fec2efe..ec13edc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1010,6 +1010,125 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Convert MARK action to DV specification. This routine is used
+ * in extensive metadata only and requires metadata register to be
+ * handled. In legacy mode hardware tag resource is engaged.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] conf
+ * Pointer to MARK action specification.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action_mark *conf,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ rte_be32_t mask = rte_cpu_to_be_32(MLX5_FLOW_MARK_MASK &
+ priv->sh->dv_mark_mask);
+ rte_be32_t data = rte_cpu_to_be_32(conf->id) & mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ {4, 0, 0}, /* dynamic instead of MLX5_MODI_META_REG_C_1. */
+ {0, 0, 0},
+ };
+ enum modify_reg reg;
+
+ if (!mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "zero mark action mask");
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (reg < 0)
+ return reg;
+ assert(reg > 0);
+ reg_c_x[0].id = reg_to_field[reg];
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
+ * Validate MARK item.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_item *item,
+ const struct rte_flow_attr *attr __rte_unused,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_item_mark *spec = item->spec;
+ const struct rte_flow_item_mark *mask = item->mask;
+ const struct rte_flow_item_mark nic_mask = {
+ .id = priv->sh->dv_mark_mask,
+ };
+ int ret;
+
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata feature"
+ " isn't enabled");
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ if (!nic_mask.id)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ if (!spec)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+ item->spec,
+ "data cannot be empty");
+ if (spec->id >= (MLX5_FLOW_MARK_MAX & nic_mask.id))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &spec->id,
+ "mark id exceeds the limit");
+ if (!mask)
+ mask = &nic_mask;
+ ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_mark),
+ error);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+/**
* Validate META item.
*
* @param[in] dev
@@ -1482,6 +1601,139 @@ struct field_modify_info modify_tcp[] = {
return 0;
}
+/*
+ * Validate the FLAG action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_flag(struct rte_eth_dev *dev,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_flag(action_flags, attr,
+ error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support flag action");
+ if (!(priv->sh->dv_mark_mask & MLX5_FLOW_MARK_DEFAULT))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't mark and flag in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 flag"
+ " actions in same flow");
+ return 0;
+}
+
+/**
+ * Validate MARK action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_mark(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action_mark *mark = action->conf;
+ int ret;
+
+ /* Fall back if no extended metadata register support. */
+ if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
+ return mlx5_flow_validate_action_mark(action, action_flags,
+ attr, error);
+ /* Extensive metadata mode requires registers. */
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "no metadata registers "
+ "to support mark action");
+ if (!priv->sh->dv_mark_mask)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "extended metadata register"
+ " isn't available");
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ assert(ret > 0);
+ if (!mark)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ if (mark->id >= (MLX5_FLOW_MARK_MAX & priv->sh->dv_mark_mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ &mark->id,
+ "mark id exceeds the limit");
+ if (action_flags & MLX5_FLOW_ACTION_DROP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't drop and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't flag and mark in same flow");
+ if (action_flags & MLX5_FLOW_ACTION_MARK)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 mark actions in same"
+ " flow");
+ return 0;
+}
+
/**
* Validate SET_TAG action.
*
@@ -3749,6 +4001,8 @@ struct field_modify_info modify_tcp[] = {
.dst_port = RTE_BE16(UINT16_MAX),
}
};
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
if (items == NULL)
return -1;
@@ -3905,6 +4159,14 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ ret = flow_dv_validate_item_mark(dev, items, attr,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
ret = flow_dv_validate_item_meta(dev, items, attr,
error);
@@ -3966,21 +4228,39 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
- ret = mlx5_flow_validate_action_flag(action_flags,
- attr, error);
+ ret = flow_dv_validate_action_flag(dev, action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_FLAG |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_MARK:
- ret = mlx5_flow_validate_action_mark(actions,
- action_flags,
- attr, error);
+ ret = flow_dv_validate_action_mark(dev, actions,
+ action_flags,
+ attr, error);
if (ret < 0)
return ret;
- action_flags |= MLX5_FLOW_ACTION_MARK;
- ++actions_n;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ /* Count all modify-header actions as one. */
+ if (!(action_flags &
+ MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_MARK |
+ MLX5_FLOW_ACTION_MARK_EXT;
+ } else {
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ ++actions_n;
+ }
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
@@ -4251,12 +4531,14 @@ struct field_modify_info modify_tcp[] = {
" actions in the same rule");
/* Eswitch has few restrictions on using items and actions */
if (attr->transfer) {
- if (action_flags & MLX5_FLOW_ACTION_FLAG)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_FLAG)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
"unsupported action FLAG");
- if (action_flags & MLX5_FLOW_ACTION_MARK)
+ if (!mlx5_flow_ext_mreg_supported(dev) &&
+ action_flags & MLX5_FLOW_ACTION_MARK)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL,
@@ -5219,6 +5501,44 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Add MARK item to matcher
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_mark(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_mark *mark;
+ uint32_t value;
+ uint32_t mask;
+
+ mark = item->mask ? (const void *)item->mask :
+ &rte_flow_item_mark_mask;
+ mask = mark->id & priv->sh->dv_mark_mask;
+ mark = (const void *)item->spec;
+ assert(mark);
+ value = mark->id & priv->sh->dv_mark_mask & mask;
+ if (mask) {
+ enum modify_reg reg;
+
+ /* Get the metadata register index for the mark. */
+ reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, NULL);
+ assert(reg > 0);
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
+}
+
+/**
* Add META item to matcher
*
* @param[in, out] matcher
@@ -5227,8 +5547,6 @@ struct field_modify_info modify_tcp[] = {
* Flow matcher value.
* @param[in] item
* Flow pattern to translate.
- * @param[in] inner
- * Item is inner pattern.
*/
static void
flow_dv_translate_item_meta(void *matcher, void *key,
@@ -5899,6 +6217,7 @@ struct field_modify_info modify_tcp[] = {
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *dev_conf = &priv->config;
struct rte_flow *flow = dev_flow->flow;
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -5933,7 +6252,7 @@ struct field_modify_info modify_tcp[] = {
if (attr->transfer)
mhdr_res.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
if (priority == MLX5_FLOW_PRIO_RSVD)
- priority = priv->config.flow_prio - 1;
+ priority = dev_conf->flow_prio - 1;
for (; !actions_end ; actions++) {
const struct rte_flow_action_queue *queue;
const struct rte_flow_action_rss *rss;
@@ -5964,6 +6283,19 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_PORT_ID;
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ struct rte_flow_action_mark mark = {
+ .id = MLX5_FLOW_MARK_DEFAULT,
+ };
+
+ if (flow_dv_convert_action_mark(dev, &mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
tag_resource.tag =
mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
if (!dev_flow->dv.tag_resource)
@@ -5972,9 +6304,22 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ const struct rte_flow_action_mark *mark =
+ (const struct rte_flow_action_mark *)
+ actions->conf;
+
+ if (flow_dv_convert_action_mark(dev, mark,
+ &mhdr_res,
+ error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
+ break;
+ }
+ /* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
(actions->conf))->id);
@@ -5984,7 +6329,6 @@ struct field_modify_info modify_tcp[] = {
return errno;
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
- action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
@@ -6021,7 +6365,7 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
- if (!priv->config.devx) {
+ if (!dev_conf->devx) {
rte_errno = ENOTSUP;
goto cnt_err;
}
@@ -6434,6 +6778,11 @@ struct field_modify_info modify_tcp[] = {
items, last_item, tunnel);
last_item = MLX5_FLOW_LAYER_MPLS;
break;
+ case RTE_FLOW_ITEM_TYPE_MARK:
+ flow_dv_translate_item_mark(dev, match_mask,
+ match_value, items);
+ last_item = MLX5_FLOW_ITEM_MARK;
+ break;
case RTE_FLOW_ITEM_TYPE_META:
flow_dv_translate_item_meta(match_mask, match_value,
items);
--
1.8.3.1
* [dpdk-dev] [PATCH v3 15/19] net/mlx5: extend flow meta data support
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (13 preceding siblings ...)
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
@ 2019-11-07 17:10 ` Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
` (4 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:10 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
The META item is supported on both Rx and Tx, and the 'transfer'
attribute is also supported. The SET_META action is added as well.
Due to restrictions on reg_c[meta], various bit widths might be
available. If the devarg parameter dv_xmeta_en=1, META uses the
metadata register reg_c[0], which may be required for internal
kernel or firmware needs. In this case the PMD queries the kernel
about the available fields in reg_c[0] and restricts the register
usage accordingly. If the devarg parameter dv_xmeta_en=2, the META
feature uses reg_c[1] and there should be no limitation on the
data width.
However, the extensive META feature is currently disabled until
register copy on loopback is supported by forthcoming patches.
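As a usage sketch only (values arbitrary, queue 0 assumed), an ingress
flow may stamp packets with SET_META while an egress flow matches the
metadata put by the application into the mbuf dynamic field:

        /* Ingress: stamp matched packets with metadata 0x0a. */
        struct rte_flow_action_set_meta set_meta = {
                .data = 0x0a,
                .mask = 0xff,
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action rx_actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &set_meta },
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        /* Egress: match per-packet metadata provided by the application. */
        struct rte_flow_item_meta meta_spec = { .data = 0x0a };
        struct rte_flow_item tx_pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };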
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.h | 4 +-
drivers/net/mlx5/mlx5_flow_dv.c | 255 +++++++++++++++++++++++++++++++++++++---
2 files changed, 240 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index d6209ff..ef16aef 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -196,6 +196,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_DEC_TCP_ACK (1u << 31)
#define MLX5_FLOW_ACTION_SET_TAG (1ull << 32)
#define MLX5_FLOW_ACTION_MARK_EXT (1ull << 33)
+#define MLX5_FLOW_ACTION_SET_META (1ull << 34)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -231,7 +232,8 @@ enum mlx5_feature_name {
MLX5_FLOW_ACTION_DEC_TCP_ACK | \
MLX5_FLOW_ACTION_OF_SET_VLAN_VID | \
MLX5_FLOW_ACTION_SET_TAG | \
- MLX5_FLOW_ACTION_MARK_EXT)
+ MLX5_FLOW_ACTION_MARK_EXT | \
+ MLX5_FLOW_ACTION_SET_META)
#define MLX5_FLOW_VLAN_ACTIONS (MLX5_FLOW_ACTION_OF_POP_VLAN | \
MLX5_FLOW_ACTION_OF_PUSH_VLAN)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ec13edc..60ebbca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1060,6 +1060,103 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Get metadata register index for specified steering domain.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ * Attributes of flow to determine steering domain.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * positive index on success, a negative errno value otherwise
+ * and rte_errno is set.
+ */
+static enum modify_reg
+flow_dv_get_metadata_reg(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ enum modify_reg reg =
+ mlx5_flow_get_reg_id(dev, attr->transfer ?
+ MLX5_METADATA_FDB :
+ attr->egress ?
+ MLX5_METADATA_TX :
+ MLX5_METADATA_RX, 0, error);
+ if (reg < 0)
+ return rte_flow_error_set(error,
+ ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "unavailable "
+ "metadata register");
+ return reg;
+}
+
+/**
+ * Convert SET_META action to DV specification.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in,out] resource
+ * Pointer to the modify-header resource.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
+ * @param[in] conf
+ * Pointer to action specification.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_convert_action_set_meta
+ (struct rte_eth_dev *dev,
+ struct mlx5_flow_dv_modify_hdr_resource *resource,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_action_set_meta *conf,
+ struct rte_flow_error *error)
+{
+ uint32_t data = conf->data;
+ uint32_t mask = conf->mask;
+ struct rte_flow_item item = {
+ .spec = &data,
+ .mask = &mask,
+ };
+ struct field_modify_info reg_c_x[] = {
+ [1] = {0, 0, 0},
+ };
+ enum modify_reg reg = flow_dv_get_metadata_reg(dev, attr, error);
+
+ if (reg < 0)
+ return reg;
+ /*
+ * In datapath code there is no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0;
+
+ assert(msk_c0);
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ shl_c0 = rte_bsf32(msk_c0);
+#else
+ shl_c0 = sizeof(msk_c0) * CHAR_BIT - rte_fls_u32(msk_c0);
+#endif
+ mask <<= shl_c0;
+ data <<= shl_c0;
+ assert(!(~msk_c0 & rte_cpu_to_be_32(mask)));
+ }
+ reg_c_x[0] = (struct field_modify_info){4, 0, reg_to_field[reg]};
+ /* The routine expects parameters in memory as big-endian ones. */
+ return flow_dv_convert_modify_action(&item, reg_c_x, NULL, resource,
+ MLX5_MODIFICATION_TYPE_SET, error);
+}
+
+/**
* Validate MARK item.
*
* @param[in] dev
@@ -1149,11 +1246,14 @@ struct field_modify_info modify_tcp[] = {
const struct rte_flow_attr *attr,
struct rte_flow_error *error)
{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
const struct rte_flow_item_meta *spec = item->spec;
const struct rte_flow_item_meta *mask = item->mask;
- const struct rte_flow_item_meta nic_mask = {
+ struct rte_flow_item_meta nic_mask = {
.data = UINT32_MAX
};
+ enum modify_reg reg;
int ret;
if (!spec)
@@ -1163,23 +1263,32 @@ struct field_modify_info modify_tcp[] = {
"data cannot be empty");
if (!spec->data)
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
- NULL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL,
"data cannot be zero");
+ if (config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) {
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg == REG_B)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "match on reg_b "
+ "isn't supported");
+ if (reg != REG_A)
+ nic_mask.data = priv->sh->dv_meta_mask;
+ }
if (!mask)
mask = &rte_flow_item_meta_mask;
ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
(const uint8_t *)&nic_mask,
sizeof(struct rte_flow_item_meta),
error);
- if (ret < 0)
- return ret;
- if (attr->ingress)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
- NULL,
- "pattern not supported for ingress");
- return 0;
+ return ret;
}
/**
@@ -1735,6 +1844,67 @@ struct field_modify_info modify_tcp[] = {
}
/**
+ * Validate SET_META action.
+ *
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the encap action.
+ * @param[in] action_flags
+ * Holds the actions detected until now.
+ * @param[in] attr
+ * Pointer to flow attributes
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_action_set_meta(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ uint64_t action_flags __rte_unused,
+ const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_set_meta *conf;
+ uint32_t nic_mask = UINT32_MAX;
+ enum modify_reg reg;
+
+ if (!mlx5_flow_ext_mreg_supported(dev))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "extended metadata register"
+ " isn't supported");
+ reg = flow_dv_get_metadata_reg(dev, attr, error);
+ if (reg < 0)
+ return reg;
+ if (reg != REG_A && reg != REG_B) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ nic_mask = priv->sh->dv_meta_mask;
+ }
+ if (!(action->conf))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "configuration cannot be null");
+ conf = (const struct rte_flow_action_set_meta *)action->conf;
+ if (!conf->mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero mask doesn't have any effect");
+ if (conf->mask & ~nic_mask)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "meta data must be within reg C0");
+ if (!(conf->data & conf->mask))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "zero value has no effect");
+ return 0;
+}
+
+/**
* Validate SET_TAG action.
*
* @param[in] dev
@@ -4262,6 +4432,17 @@ struct field_modify_info modify_tcp[] = {
++actions_n;
}
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ ret = flow_dv_validate_action_set_meta(dev, actions,
+ action_flags,
+ attr, error);
+ if (ret < 0)
+ return ret;
+ /* Count all modify-header actions as one action. */
+ if (!(action_flags & MLX5_FLOW_MODIFY_HDR_ACTIONS))
+ ++actions_n;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
ret = flow_dv_validate_action_set_tag(dev, actions,
action_flags,
@@ -5541,15 +5722,21 @@ struct field_modify_info modify_tcp[] = {
/**
* Add META item to matcher
*
+ * @param[in] dev
+ * The device to configure through.
* @param[in, out] matcher
* Flow matcher.
* @param[in, out] key
* Flow matcher value.
+ * @param[in] attr
+ * Attributes of flow that includes this item.
* @param[in] item
* Flow pattern to translate.
*/
static void
-flow_dv_translate_item_meta(void *matcher, void *key,
+flow_dv_translate_item_meta(struct rte_eth_dev *dev,
+ void *matcher, void *key,
+ const struct rte_flow_attr *attr,
const struct rte_flow_item *item)
{
const struct rte_flow_item_meta *meta_m;
@@ -5559,10 +5746,34 @@ struct field_modify_info modify_tcp[] = {
if (!meta_m)
meta_m = &rte_flow_item_meta_mask;
meta_v = (const void *)item->spec;
- if (meta_v)
- flow_dv_match_meta_reg(matcher, key, REG_A,
- rte_cpu_to_be_32(meta_v->data),
- rte_cpu_to_be_32(meta_m->data));
+ if (meta_v) {
+ enum modify_reg reg;
+ uint32_t value = meta_v->data;
+ uint32_t mask = meta_m->data;
+
+ reg = flow_dv_get_metadata_reg(dev, attr, NULL);
+ if (reg < 0)
+ return;
+ /*
+ * In datapath code there is no endianness
+ * conversions for performance reasons, all
+ * pattern conversions are done in rte_flow.
+ */
+ value = rte_cpu_to_be_32(value);
+ mask = rte_cpu_to_be_32(mask);
+ if (reg == REG_C_0) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t msk_c0 = priv->sh->dv_regc0_mask;
+ uint32_t shl_c0 = rte_bsf32(msk_c0);
+
+ msk_c0 = rte_cpu_to_be_32(msk_c0);
+ value <<= shl_c0;
+ mask <<= shl_c0;
+ assert(msk_c0);
+ assert(!(~msk_c0 & mask));
+ }
+ flow_dv_match_meta_reg(matcher, key, reg, value, mask);
+ }
}
/**
@@ -6330,6 +6541,14 @@ struct field_modify_info modify_tcp[] = {
dev_flow->dv.actions[actions_n++] =
dev_flow->dv.tag_resource->action;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_META:
+ if (flow_dv_convert_action_set_meta
+ (dev, &mhdr_res, attr,
+ (const struct rte_flow_action_set_meta *)
+ actions->conf, error))
+ return -rte_errno;
+ action_flags |= MLX5_FLOW_ACTION_SET_META;
+ break;
case RTE_FLOW_ACTION_TYPE_SET_TAG:
if (flow_dv_convert_action_set_tag
(dev, &mhdr_res,
@@ -6784,8 +7003,8 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_ITEM_MARK;
break;
case RTE_FLOW_ITEM_TYPE_META:
- flow_dv_translate_item_meta(match_mask, match_value,
- items);
+ flow_dv_translate_item_meta(dev, match_mask,
+ match_value, attr, items);
last_item = MLX5_FLOW_ITEM_METADATA;
break;
case RTE_FLOW_ITEM_TYPE_ICMP:
--
1.8.3.1
* [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (14 preceding siblings ...)
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
@ 2019-11-07 17:10 ` Viacheslav Ovsiienko
2019-11-25 14:24 ` David Marchand
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
` (3 subsequent siblings)
19 siblings, 1 reply; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:10 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
This patch moves the metadata from the completion descriptor
to the appropriate dynamic mbuf field.
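On the application side, reading the delivered value might look roughly
like the sketch below (the function name is illustrative; the dynamic
field must be registered with rte_flow_dynf_metadata_register() before
the flows using metadata are created):

        #include <rte_ethdev.h>
        #include <rte_mbuf.h>
        #include <rte_flow.h>

        /* Drain one burst and pick up metadata from the dynamic field. */
        static void
        rx_burst_with_metadata(uint16_t port_id)
        {
                struct rte_mbuf *mbufs[32];
                uint16_t nb = rte_eth_rx_burst(port_id, 0, mbufs,
                                               RTE_DIM(mbufs));
                uint16_t i;

                for (i = 0; i < nb; i++) {
                        if (mbufs[i]->ol_flags & PKT_RX_DYNF_METADATA) {
                                uint32_t meta =
                                        *RTE_FLOW_DYNF_METADATA(mbufs[i]);
                                /* 'meta' is the value set by the flow engine. */
                                (void)meta;
                        }
                        rte_pktmbuf_free(mbufs[i]);
                }
        }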
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_prm.h | 6 ++++--
drivers/net/mlx5/mlx5_rxtx.c | 5 +++++
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +++++++++++++++++++++++--
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +++++++++++++++++++++++----
5 files changed, 78 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b405cb6..a0c37c8 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -357,12 +357,14 @@ struct mlx5_cqe {
uint16_t hdr_type_etc;
uint16_t vlan_info;
uint8_t lro_num_seg;
- uint8_t rsvd3[11];
+ uint8_t rsvd3[3];
+ uint32_t flow_table_metadata;
+ uint8_t rsvd4[4];
uint32_t byte_cnt;
uint64_t timestamp;
uint32_t sop_drop_qpn;
uint16_t wqe_counter;
- uint8_t rsvd4;
+ uint8_t rsvd5;
uint8_t op_own;
};
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 887e283..f28a909 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -26,6 +26,7 @@
#include <rte_branch_prediction.h>
#include <rte_ether.h>
#include <rte_cycles.h>
+#include <rte_flow.h>
#include "mlx5.h"
#include "mlx5_utils.h"
@@ -1251,6 +1252,10 @@ enum mlx5_txcmp_code {
pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
}
}
+ if (rte_flow_dynf_metadata_avail() && cqe->flow_table_metadata) {
+ pkt->ol_flags |= PKT_RX_DYNF_METADATA;
+ *RTE_FLOW_DYNF_METADATA(pkt) = cqe->flow_table_metadata;
+ }
if (rxq->csum)
pkt->ol_flags |= rxq_cq_to_ol_flags(cqe);
if (rxq->vlan_strip &&
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 3be3a6d..8e79883 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -416,7 +416,6 @@
vec_cmpeq((vector unsigned int)flow_tag,
(vector unsigned int)pinfo_ft_mask)));
}
-
/*
* Merge the two fields to generate the following:
* bit[1] = l3_ok
@@ -1011,7 +1010,29 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
-
+ if (rte_flow_dynf_metadata_avail()) {
+ uint64_t flag = rte_flow_dynf_metadata_mask;
+ int offs = rte_flow_dynf_metadata_offs;
+ uint32_t metadata;
+
+ /* This code is subject to further optimization. */
+ metadata = cq[pos].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos], offs, uint32_t *) =
+ metadata;
+ pkts[pos]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 1].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 1], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 1]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 2].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 2], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 2]->ol_flags |= metadata ? flag : 0ULL;
+ metadata = cq[pos + 3].flow_table_metadata;
+ *RTE_MBUF_DYNFIELD(pkts[pos + 3], offs, uint32_t *) =
+ metadata;
+ pkts[pos + 3]->ol_flags |= metadata ? flag : 0ULL;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vec_perm(op_own, zero, len_shuf_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index e914d01..86785c7 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -687,6 +687,29 @@
container_of(p3, struct mlx5_cqe,
pkt_info)->timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(elts[pos]) =
+ container_of(p0, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 1]) =
+ container_of(p1, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 2]) =
+ container_of(p2, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(elts[pos + 3]) =
+ container_of(p3, struct mlx5_cqe,
+ pkt_info)->flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos]))
+ elts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 1]))
+ elts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 2]))
+ elts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(elts[pos + 3]))
+ elts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = vbic_u16(byte_cnt, invalid_mask);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index ca8ed41..35b7761 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -537,8 +537,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].csum);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x30);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd4[2]);
cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x04);
cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -564,8 +564,8 @@
cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].csum);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x30);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x30);
- cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd3[9]);
- cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd3[9]);
+ cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd4[2]);
+ cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd4[2]);
cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x04);
cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x04);
/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -640,6 +640,25 @@
pkts[pos + 3]->timestamp =
rte_be_to_cpu_64(cq[pos + p3].timestamp);
}
+ if (rte_flow_dynf_metadata_avail()) {
+ /* This code is subject to further optimization. */
+ *RTE_FLOW_DYNF_METADATA(pkts[pos]) =
+ cq[pos].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 1]) =
+ cq[pos + p1].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 2]) =
+ cq[pos + p2].flow_table_metadata;
+ *RTE_FLOW_DYNF_METADATA(pkts[pos + 3]) =
+ cq[pos + p3].flow_table_metadata;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos]))
+ pkts[pos]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 1]))
+ pkts[pos + 1]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 2]))
+ pkts[pos + 2]->ol_flags |= PKT_RX_DYNF_METADATA;
+ if (*RTE_FLOW_DYNF_METADATA(pkts[pos + 3]))
+ pkts[pos + 3]->ol_flags |= PKT_RX_DYNF_METADATA;
+ }
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Add up received bytes count. */
byte_cnt = _mm_shuffle_epi8(op_own, len_shuf_mask);
--
1.8.3.1
* [dpdk-dev] [PATCH v3 17/19] net/mlx5: introduce flow splitters chain
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (15 preceding siblings ...)
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
@ 2019-11-07 17:10 ` Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
` (2 subsequent siblings)
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:10 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika
The mlx5 hardware has some limitations and a flow might
have to be split into multiple internal subflows.
For example, this is needed to provide meter object
sharing between multiple flows or to provide metadata
register copying before the final queue/RSS action.
Multiple features might require several levels of
splitting. For example, the hairpin feature splits the
original flow into two parts - Rx and Tx. Then the
RSS feature should split the Rx part into multiple subflows
with extended item sets. Then, the metering feature might
require splitting each RSS subflow into a meter jump
chain, and then extensive metadata support might
require the final subflow splitting. So, we have
to organize a chain of splitting subroutines to
abstract each level of splitting.
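Schematically, adding one more splitting level means adding one more
wrapper that eventually falls through to flow_create_split_inner().
The function below is only an illustrative placeholder (the name is
hypothetical); a real splitter creates several subflows here before
delegating to the next level:

        static int
        flow_create_split_example(struct rte_eth_dev *dev,
                                  struct rte_flow *flow,
                                  const struct rte_flow_attr *attr,
                                  const struct rte_flow_item items[],
                                  const struct rte_flow_action actions[],
                                  bool external, struct rte_flow_error *error)
        {
                /* No modification at this level - just go down the chain. */
                return flow_create_split_inner(dev, flow, NULL, attr, items,
                                               actions, external, error);
        }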
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 116 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 106 insertions(+), 10 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b87657a..d97a0b2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2786,6 +2786,103 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * The last stage of splitting chain, just creates the subflow
+ * without any modification.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in, out] sub_flow
+ * Pointer to return the created subflow, may be NULL.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_inner(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ struct mlx5_flow **sub_flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_flow *dev_flow;
+
+ dev_flow = flow_drv_prepare(flow, attr, items, actions, error);
+ if (!dev_flow)
+ return -rte_errno;
+ dev_flow->flow = flow;
+ dev_flow->external = external;
+ /* Subflow object was created, we must include one in the list. */
+ LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
+ if (sub_flow)
+ *sub_flow = dev_flow;
+ return flow_drv_translate(dev, dev_flow, attr, items, actions, error);
+}
+
+/**
+ * Split the flow to subflow set. The splitters might be linked
+ * in the chain, like this:
+ * flow_create_split_outer() calls:
+ * flow_create_split_meter() calls:
+ * flow_create_split_metadata(meter_subflow_0) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ * flow_create_split_metadata(meter_subflow_1) calls:
+ * flow_create_split_inner(metadata_subflow_0)
+ * flow_create_split_inner(metadata_subflow_1)
+ * flow_create_split_inner(metadata_subflow_2)
+ *
+ * This provide flexible way to add new levels of flow splitting.
+ * The all of successfully created subflows are included to the
+ * parent flow dev_flow list.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_outer(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ int ret;
+
+ ret = flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ assert(ret <= 0);
+ return ret;
+}
+
+/**
* Create a flow and add it to @p list.
*
* @param dev
@@ -2903,16 +3000,15 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
buf->entry[0].pattern = (void *)(uintptr_t)items;
}
for (i = 0; i < buf->entries; ++i) {
- dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
- p_actions_rx, error);
- if (!dev_flow)
- goto error;
- dev_flow->flow = flow;
- dev_flow->external = external;
- LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
- ret = flow_drv_translate(dev, dev_flow, attr,
- buf->entry[i].pattern,
- p_actions_rx, error);
+ /*
+ * The splitter may create multiple dev_flows,
+ * depending on configuration. In the simplest
+ * case it just creates unmodified original flow.
+ */
+ ret = flow_create_split_outer(dev, flow, attr,
+ buf->entry[i].pattern,
+ p_actions_rx, external,
+ error);
if (ret < 0)
goto error;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 18/19] net/mlx5: split Rx flows to provide metadata copy
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (16 preceding siblings ...)
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
@ 2019-11-07 17:10 ` Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
2019-11-07 22:46 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Raslan Darawsheh
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:10 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
Values set by MARK and SET_META actions should be carried over
to the VF representor in case of flow miss on Tx path. However,
as not all metadata registers are preserved across the different
domains (NIC Rx/Tx and E-Switch FDB), as a workaround, those
values should be carried by reg_c's which are preserved across
domains and copied to STE flow_tag (MARK) and reg_b (META) fields
in the last stage of flow steering, in order to scatter those
values to flow_tag and flow_table_metadata of CQE.
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it should
be manually set by a flow per MARK ID. For this purpose, there
should be a dedicated flow table - RX_CP_TBL - and all Rx flows
should pass through this table to properly copy the values.
As the last action of Rx flow steering must be a terminal action
such as QUEUE, RSS or DROP, if a user flow has Q/RSS action, the
flow must be split in order to pass through the RX_CP_TBL. The
remaining Q/RSS action will then be performed by another dedicated
action table - RX_ACT_TBL.
For example, for an ingress flow:
pattern,
actions_having_QRSS
it must be split into two flows. The first one is,
pattern,
actions_except_QRSS / copy (reg_c[2] := flow_id) / jump to RX_CP_TBL
and the second one in RX_ACT_TBL.
(if reg_c[2] == flow_id),
action_QRSS
where flow_id is a uniquely allocated and managed identifier.
This patch implements the Rx flow splitting and builds the RX_ACT_TBL.
Also, for each egress flow on NIC Tx, a copy action (reg_c[] := reg_a)
should be added in order to transfer metadata from the WQE.
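In rte_flow terms the split can be pictured roughly as below (an
illustrative sketch, not code from this patch; the internal action/item
types and the group macro are the ones added by this series, flow_id is
the allocated identifier mentioned above):

/* Prefix subflow: Q/RSS replaced by SET_TAG, then jump to RX_CP_TBL. */
struct mlx5_rte_flow_action_set_tag set_tag = {
	.data = flow_id,	/* unique identifier, matched by the suffix */
};
struct rte_flow_action_jump jump = {
	.group = MLX5_FLOW_MREG_CP_TABLE_GROUP,	/* RX_CP_TBL */
};
/* actions: ...original non-Q/RSS actions..., TAG(set_tag), JUMP(jump), END */

/* Suffix subflow in RX_ACT_TBL: match the tag, perform the original Q/RSS. */
struct mlx5_rte_flow_item_tag tag_spec = {
	.data = flow_id,	/* reg_c[2] == flow_id */
};
/* pattern: TAG(tag_spec), END; actions: original QUEUE or RSS, END */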
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 8 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.c | 428 ++++++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 1 +
4 files changed, 436 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index fb7b94b..6359bc9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2411,6 +2411,12 @@ struct mlx5_flow_id_pool *
err = mlx5_alloc_shared_dr(priv);
if (err)
goto error;
+ priv->qrss_id_pool = mlx5_flow_id_pool_alloc();
+ if (!priv->qrss_id_pool) {
+ DRV_LOG(ERR, "can't create flow id pool");
+ err = ENOMEM;
+ goto error;
+ }
}
/* Supported Verbs flow priority number detection. */
err = mlx5_flow_discover_priorities(eth_dev);
@@ -2463,6 +2469,8 @@ struct mlx5_flow_id_pool *
close(priv->nl_socket_rdma);
if (priv->vmwa_context)
mlx5_vlan_vmwa_exit(priv->vmwa_context);
+ if (priv->qrss_id_pool)
+ mlx5_flow_id_pool_release(priv->qrss_id_pool);
if (own_domain_id)
claim_zero(rte_eth_switch_domain_free(priv->domain_id));
rte_free(priv);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 92d445a..9c1a88a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -733,6 +733,7 @@ struct mlx5_priv {
uint32_t nl_sn; /* Netlink message sequence number. */
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
+ struct mlx5_flow_id_pool *qrss_id_pool;
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d97a0b2..2f6ace0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2222,6 +2222,49 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Allocate unique ID for the split Q/RSS subflows. */
+static uint32_t
+flow_qrss_get_id(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t qrss_id, ret;
+
+ ret = mlx5_flow_id_get(priv->qrss_id_pool, &qrss_id);
+ if (ret)
+ return 0;
+ assert(qrss_id);
+ return qrss_id;
+}
+
+/* Free unique ID for the split Q/RSS subflows. */
+static void
+flow_qrss_free_id(struct rte_eth_dev *dev, uint32_t qrss_id)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ if (qrss_id)
+ mlx5_flow_id_release(priv->qrss_id_pool, qrss_id);
+}
+
+/**
+ * Release resource related QUEUE/RSS action split.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Flow to release id's from.
+ */
+static void
+flow_mreg_split_qrss_release(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow *dev_flow;
+
+ LIST_FOREACH(dev_flow, &flow->dev_flows, next)
+ if (dev_flow->qrss_id)
+ flow_qrss_free_id(dev, dev_flow->qrss_id);
+}
+
static int
flow_null_validate(struct rte_eth_dev *dev __rte_unused,
const struct rte_flow_attr *attr __rte_unused,
@@ -2511,6 +2554,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
const struct mlx5_flow_driver_ops *fops;
enum mlx5_flow_drv_type type = flow->drv_type;
+ flow_mreg_split_qrss_release(dev, flow);
assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
fops = flow_get_drv_ops(type);
fops->destroy(dev, flow);
@@ -2581,6 +2625,41 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Get QUEUE/RSS action from the action list.
+ *
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] qrss
+ * Pointer to the return pointer, set to the QUEUE/RSS action if found,
+ * left untouched otherwise.
+ *
+ * @return
+ * Total number of actions.
+ */
+static int
+flow_parse_qrss_action(const struct rte_flow_action actions[],
+ const struct rte_flow_action **qrss)
+{
+ int actions_n = 0;
+
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ *qrss = actions;
+ break;
+ default:
+ break;
+ }
+ actions_n++;
+ }
+ /* Count RTE_FLOW_ACTION_TYPE_END. */
+ return actions_n + 1;
+}
+
+/**
* Check if the flow should be split due to hairpin.
* The reason for the split is that in current HW we can't
* support encap on Rx, so if a flow has encap we move it
@@ -2832,6 +2911,351 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
}
/**
+ * Split action list having QUEUE/RSS for metadata register copy.
+ *
+ * Once a Q/RSS action is detected in the user's action list, the flow actions
+ * should be split in order to copy metadata registers, which will happen in
+ * RX_CP_TBL like,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * The Q/RSS action will be performed on RX_ACT_TBL after passing by RX_CP_TBL.
+ * This is because the last action of each flow must be a terminal action
+ * (QUEUE, RSS or DROP).
+ *
+ * Flow ID must be allocated to identify actions in the RX_ACT_TBL and it is
+ * stored and kept in the mlx5_flow structure per each sub_flow.
+ *
+ * The Q/RSS action is replaced with,
+ * - SET_TAG, setting the allocated flow ID to reg_c[2].
+ * And the following JUMP action is added at the end,
+ * - JUMP, to RX_CP_TBL.
+ *
+ * A flow to perform the remaining Q/RSS action will be created in RX_ACT_TBL by
+ * flow_create_split_metadata() routine. The flow will look like,
+ * - If flow ID matches (reg_c[2]), perform Q/RSS.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] split_actions
+ * Pointer to store split actions to jump to CP_TBL.
+ * @param[in] actions
+ * Pointer to the list of original flow actions.
+ * @param[in] qrss
+ * Pointer to the Q/RSS action.
+ * @param[in] actions_n
+ * Number of original actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * non-zero unique flow_id on success, otherwise 0 and
+ * error/rte_errno are set.
+ */
+static uint32_t
+flow_mreg_split_qrss_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *split_actions,
+ const struct rte_flow_action *actions,
+ const struct rte_flow_action *qrss,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_rte_flow_action_set_tag *set_tag;
+ struct rte_flow_action_jump *jump;
+ const int qrss_idx = qrss - actions;
+ uint32_t flow_id;
+ int ret = 0;
+
+ /*
+ * Given actions will be split
+ * - Replace QUEUE/RSS action with SET_TAG to set flow ID.
+ * - Add jump to mreg CP_TBL.
+ * As a result, there will be one more action.
+ */
+ ++actions_n;
+ /*
+ * Allocate the new subflow ID. This one is unique within
+ * device and not shared with representors. Otherwise,
+ * we would have to resolve multi-thread access synch
+ * issue. Each flow on the shared device is appended
+ * with source vport identifier, so the resulting
+ * flows will be unique in the shared (by master and
+ * representors) domain even if they have coinciding
+ * IDs.
+ */
+ flow_id = flow_qrss_get_id(dev);
+ if (!flow_id)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "can't allocate id "
+ "for split Q/RSS subflow");
+ /* Internal SET_TAG action to set flow ID. */
+ set_tag = (void *)(split_actions + actions_n);
+ *set_tag = (struct mlx5_rte_flow_action_set_tag){
+ .data = flow_id,
+ };
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ return ret;
+ set_tag->id = ret;
+ /* JUMP action to jump to mreg copy table (CP_TBL). */
+ jump = (void *)(set_tag + 1);
+ *jump = (struct rte_flow_action_jump){
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ };
+ /* Construct new actions array. */
+ memcpy(split_actions, actions, sizeof(*split_actions) * actions_n);
+ /* Replace QUEUE/RSS action. */
+ split_actions[qrss_idx] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ .conf = set_tag,
+ };
+ split_actions[actions_n - 2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = jump,
+ };
+ split_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return flow_id;
+}
+
+/**
+ * Extend the given action list for Tx metadata copy.
+ *
+ * Copy the given action list to the ext_actions and add flow metadata register
+ * copy action in order to copy reg_a set by WQE to reg_c[0].
+ *
+ * @param[out] ext_actions
+ * Pointer to the extended action list.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[in] actions_n
+ * Number of actions in the list.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_mreg_tx_copy_prep(struct rte_eth_dev *dev,
+ struct rte_flow_action *ext_actions,
+ const struct rte_flow_action *actions,
+ int actions_n, struct rte_flow_error *error)
+{
+ struct mlx5_flow_action_copy_mreg *cp_mreg =
+ (struct mlx5_flow_action_copy_mreg *)
+ (ext_actions + actions_n + 1);
+ int ret;
+
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->dst = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_TX, 0, error);
+ if (ret < 0)
+ return ret;
+ cp_mreg->src = ret;
+ memcpy(ext_actions, actions,
+ sizeof(*ext_actions) * actions_n);
+ ext_actions[actions_n - 1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = cp_mreg,
+ };
+ ext_actions[actions_n] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ return 0;
+}
+
+/**
+ * The splitting for metadata feature.
+ *
+ * - Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ *
+ * - All the actions on NIC Tx should have a mreg copy action to
+ * copy reg_a from WQE to reg_c[0].
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[in] flow
+ * Parent flow structure pointer.
+ * @param[in] attr
+ * Flow rule attributes.
+ * @param[in] items
+ * Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ * Associated actions (list terminated by the END action).
+ * @param[in] external
+ * This flow rule is created by request external to PMD.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ * @return
+ * 0 on success, negative value otherwise
+ */
+static int
+flow_create_split_metadata(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ const struct rte_flow_action *qrss = NULL;
+ struct rte_flow_action *ext_actions = NULL;
+ struct mlx5_flow *dev_flow = NULL;
+ uint32_t qrss_id = 0;
+ size_t act_size;
+ int actions_n;
+ int ret;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev))
+ return flow_create_split_inner(dev, flow, NULL, attr, items,
+ actions, external, error);
+ actions_n = flow_parse_qrss_action(actions, &qrss);
+ if (qrss) {
+ /* Exclude hairpin flows from splitting. */
+ if (qrss->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *queue;
+
+ queue = qrss->conf;
+ if (mlx5_rxq_get_type(dev, queue->index) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ } else if (qrss->type == RTE_FLOW_ACTION_TYPE_RSS) {
+ const struct rte_flow_action_rss *rss;
+
+ rss = qrss->conf;
+ if (mlx5_rxq_get_type(dev, rss->queue[0]) ==
+ MLX5_RXQ_TYPE_HAIRPIN)
+ qrss = NULL;
+ }
+ }
+ if (qrss) {
+ /*
+ * Q/RSS action on NIC Rx should be split in order to pass by
+ * the mreg copy table (RX_CP_TBL) and then it jumps to the
+ * action table (RX_ACT_TBL) which has the split Q/RSS action.
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct rte_flow_action_set_tag) +
+ sizeof(struct rte_flow_action_jump);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /*
+ * Create the new actions list with removed Q/RSS action
+ * and appended set tag and jump to register copy table
+ * (RX_CP_TBL). We should preallocate unique tag ID here
+ * in advance, because it is needed for set tag action.
+ */
+ qrss_id = flow_mreg_split_qrss_prep(dev, ext_actions, actions,
+ qrss, actions_n, error);
+ if (!qrss_id) {
+ ret = -rte_errno;
+ goto exit;
+ }
+ } else if (attr->egress && !attr->transfer) {
+ /*
+ * All the actions on NIC Tx should have a metadata register
+ * copy action to copy reg_a from WQE to reg_c[meta]
+ */
+ act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
+ sizeof(struct mlx5_flow_action_copy_mreg);
+ ext_actions = rte_zmalloc(__func__, act_size, 0);
+ if (!ext_actions)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no memory to split "
+ "metadata flow");
+ /* Create the action list appended with copy register. */
+ ret = flow_mreg_tx_copy_prep(dev, ext_actions, actions,
+ actions_n, error);
+ if (ret < 0)
+ goto exit;
+ }
+ /* Add the unmodified original or prefix subflow. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow, attr, items,
+ ext_actions ? ext_actions : actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ if (qrss_id) {
+ const struct rte_flow_attr q_attr = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ .ingress = 1,
+ };
+ /* Internal PMD action to set register. */
+ struct mlx5_rte_flow_item_tag q_tag_spec = {
+ .data = qrss_id,
+ .id = 0,
+ };
+ struct rte_flow_item q_items[] = {
+ {
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &q_tag_spec,
+ .last = NULL,
+ .mask = NULL,
+ },
+ {
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ };
+ struct rte_flow_action q_actions[] = {
+ {
+ .type = qrss->type,
+ .conf = qrss->conf,
+ },
+ {
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ },
+ };
+ uint64_t hash_fields = dev_flow->hash_fields;
+ /*
+ * Put the unique id into the prefix flow: the id is destroyed
+ * along with the prefix flow and will be freed only when there
+ * are no actual flows with this id left, so that identifier
+ * reallocation becomes possible (for example, for other flows
+ * in other threads).
+ */
+ dev_flow->qrss_id = qrss_id;
+ qrss_id = 0;
+ dev_flow = NULL;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_COPY_MARK, 0, error);
+ if (ret < 0)
+ goto exit;
+ q_tag_spec.id = ret;
+ /* Add suffix subflow to execute Q/RSS. */
+ ret = flow_create_split_inner(dev, flow, &dev_flow,
+ &q_attr, q_items, q_actions,
+ external, error);
+ if (ret < 0)
+ goto exit;
+ assert(dev_flow);
+ dev_flow->hash_fields = hash_fields;
+ }
+
+exit:
+ /*
+ * We do not destroy the partially created sub_flows in case of error.
+ * These ones are included into parent flow list and will be destroyed
+ * by flow_drv_destroy.
+ */
+ flow_qrss_free_id(dev, qrss_id);
+ rte_free(ext_actions);
+ return ret;
+}
+
+/**
* Split the flow to subflow set. The splitters might be linked
* in the chain, like this:
* flow_create_split_outer() calls:
@@ -2876,8 +3300,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
{
int ret;
- ret = flow_create_split_inner(dev, flow, NULL, attr, items,
- actions, external, error);
+ ret = flow_create_split_metadata(dev, flow, attr, items,
+ actions, external, error);
assert(ret <= 0);
return ret;
}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ef16aef..c71938b 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -500,6 +500,7 @@ struct mlx5_flow {
#endif
struct mlx5_flow_verbs verbs;
};
+ uint32_t qrss_id; /**< Unique Q/RSS suffix subflow tag. */
bool external; /**< true if the flow is created external to PMD. */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 19/19] net/mlx5: add metadata register copy table
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (17 preceding siblings ...)
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
@ 2019-11-07 17:10 ` Viacheslav Ovsiienko
2019-11-07 22:46 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Raslan Darawsheh
19 siblings, 0 replies; 64+ messages in thread
From: Viacheslav Ovsiienko @ 2019-11-07 17:10 UTC (permalink / raw)
To: dev; +Cc: matan, rasland, thomas, orika, Yongseok Koh
While reg_c[meta] can be copied to reg_b simply by modify-header
action (it is supported by hardware), it is not possible to copy
reg_c[mark] to the STE flow_tag as flow_tag is not a metadata
register and this is not supported by hardware. Instead, it
should be manually set by a flow per each unique MARK ID. For
this purpose, there should be a dedicated flow table -
RX_CP_TBL - and all Rx flows should pass through this table
to properly copy values from the register to the flow tag field.
And for each MARK action, a copy flow should be added
to RX_CP_TBL according to the MARK ID like:
(if reg_c[mark] == mark_id),
flow_tag := mark_id / reg_b := reg_c[meta] / jump to RX_ACT_TBL
For SET_META action, there can be only one default flow like:
reg_b := reg_c[meta] / jump to RX_ACT_TBL
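A sketch of the rte_flow pieces composing such a copy flow (illustration
only, not a verbatim excerpt from the patch; the attribute, item and
action types are the ones used by this series, mark_id stands for the
MARK ID being handled):

struct rte_flow_attr attr = {
	.group = MLX5_FLOW_MREG_CP_TABLE_GROUP,	/* RX_CP_TBL */
	.ingress = 1,
};
struct mlx5_rte_flow_item_tag tag_spec = {
	.data = mark_id,	/* match reg_c[mark] == mark_id */
};
struct rte_flow_action_mark ftag = {
	.id = mark_id,		/* flow_tag := mark_id */
};
struct mlx5_flow_action_copy_mreg cp_mreg = {
	.dst = REG_B,		/* reg_b := reg_c[meta] */
};
struct rte_flow_action_jump jump = {
	.group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,	/* to RX_ACT_TBL */
};
/* actions: MARK(ftag), COPY_MREG(cp_mreg), JUMP(jump), END */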
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 15 ++
drivers/net/mlx5/mlx5.h | 7 +-
drivers/net/mlx5/mlx5_defs.h | 4 +
drivers/net/mlx5/mlx5_flow.c | 443 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 19 ++
drivers/net/mlx5/mlx5_flow_dv.c | 10 +-
6 files changed, 491 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6359bc9..32e5fe5 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1039,6 +1039,8 @@ struct mlx5_flow_id_pool *
priv->txqs = NULL;
}
mlx5_proc_priv_uninit(dev);
+ if (priv->mreg_cp_tbl)
+ mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL);
mlx5_mprq_free_mp(dev);
mlx5_free_shared_dr(priv);
if (priv->rss_conf.rss_key != NULL)
@@ -2458,9 +2460,22 @@ struct mlx5_flow_id_pool *
goto error;
}
}
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(eth_dev) &&
+ priv->sh->dv_regc0_mask) {
+ priv->mreg_cp_tbl = mlx5_hlist_create(MLX5_FLOW_MREG_HNAME,
+ MLX5_FLOW_MREG_HTABLE_SZ);
+ if (!priv->mreg_cp_tbl) {
+ err = ENOMEM;
+ goto error;
+ }
+ }
return eth_dev;
error:
if (priv) {
+ if (priv->mreg_cp_tbl)
+ mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL);
if (priv->sh)
mlx5_free_shared_dr(priv);
if (priv->nl_socket_route >= 0)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9c1a88a..619590b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -567,8 +567,9 @@ struct mlx5_flow_tbl_resource {
#define MLX5_HAIRPIN_TX_TABLE (UINT16_MAX - 1)
/* Reserve the last two tables for metadata register copy. */
#define MLX5_FLOW_MREG_ACT_TABLE_GROUP (MLX5_MAX_TABLES - 1)
-#define MLX5_FLOW_MREG_CP_TABLE_GROUP \
- (MLX5_FLOW_MREG_ACT_TABLE_GROUP - 1)
+#define MLX5_FLOW_MREG_CP_TABLE_GROUP (MLX5_MAX_TABLES - 2)
+/* Tables for metering splits should be added here. */
+#define MLX5_MAX_TABLES_EXTERNAL (MLX5_MAX_TABLES - 3)
#define MLX5_MAX_TABLES_FDB UINT16_MAX
#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
@@ -734,6 +735,8 @@ struct mlx5_priv {
LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
struct mlx5_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */
struct mlx5_flow_id_pool *qrss_id_pool;
+ struct mlx5_hlist *mreg_cp_tbl;
+ /* Hash table of Rx metadata register copy table. */
#ifndef RTE_ARCH_64
rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index a77c430..0ef532f 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -145,6 +145,10 @@
#define MLX5_XMETA_MODE_META16 1
#define MLX5_XMETA_MODE_META32 2
+/* Size of the simple hash table for metadata register table. */
+#define MLX5_FLOW_MREG_HTABLE_SZ 4096
+#define MLX5_FLOW_MREG_HNAME "MARK_COPY_TABLE"
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2f6ace0..9ef7f7d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -671,7 +671,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ /*
+ * To support metadata register copy on Tx loopback,
+ * this must always be enabled (metadata may arrive
+ * from another port - not from local flows only).
+ */
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->rxq.mark = 1;
rxq_ctrl->flow_mark_n++;
}
@@ -735,7 +745,12 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
container_of((*priv->rxqs)[idx],
struct mlx5_rxq_ctrl, rxq);
- if (mark) {
+ if (priv->config.dv_flow_en &&
+ priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+ mlx5_flow_ext_mreg_supported(dev)) {
+ rxq_ctrl->rxq.mark = 1;
+ rxq_ctrl->flow_mark_n = 1;
+ } else if (mark) {
rxq_ctrl->flow_mark_n--;
rxq_ctrl->rxq.mark = !!rxq_ctrl->flow_mark_n;
}
@@ -2731,6 +2746,398 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/* Declare flow create/destroy prototype in advance. */
+static struct rte_flow *
+flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ bool external, struct rte_flow_error *error);
+
+static void
+flow_list_destroy(struct rte_eth_dev *dev, struct mlx5_flows *list,
+ struct rte_flow *flow);
+
+/**
+ * Add a flow of copying flow metadata registers in RX_CP_TBL.
+ *
+ * As mark_id is unique, if there's already a registered flow for the mark_id,
+ * return by increasing the reference counter of the resource. Otherwise, create
+ * the resource (mcp_res) and flow.
+ *
+ * Flow looks like,
+ * - If ingress port is ANY and reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For default flow (zero mark_id), flow is like,
+ * - If ingress port is ANY,
+ * reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param mark_id
+ * ID of MARK action, zero means default flow for META.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * Associated resource on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_mreg_copy_resource *
+flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct rte_flow_attr attr = {
+ .group = MLX5_FLOW_MREG_CP_TABLE_GROUP,
+ .ingress = 1,
+ };
+ struct mlx5_rte_flow_item_tag tag_spec = {
+ .data = mark_id,
+ };
+ struct rte_flow_item items[] = {
+ [1] = { .type = RTE_FLOW_ITEM_TYPE_END, },
+ };
+ struct rte_flow_action_mark ftag = {
+ .id = mark_id,
+ };
+ struct mlx5_flow_action_copy_mreg cp_mreg = {
+ .dst = REG_B,
+ .src = 0,
+ };
+ struct rte_flow_action_jump jump = {
+ .group = MLX5_FLOW_MREG_ACT_TABLE_GROUP,
+ };
+ struct rte_flow_action actions[] = {
+ [3] = { .type = RTE_FLOW_ACTION_TYPE_END, },
+ };
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ int ret;
+
+ /* Fill the register fields in the flow. */
+ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error);
+ if (ret < 0)
+ return NULL;
+ tag_spec.id = ret;
+ ret = mlx5_flow_get_reg_id(dev, MLX5_METADATA_RX, 0, error);
+ if (ret < 0)
+ return NULL;
+ cp_mreg.src = ret;
+ /* Check if already registered. */
+ assert(priv->mreg_cp_tbl);
+ mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id);
+ if (mcp_res) {
+ /* For non-default rule. */
+ if (mark_id)
+ mcp_res->refcnt++;
+ assert(mark_id || mcp_res->refcnt == 1);
+ return mcp_res;
+ }
+ /* Provide the full width of FLAG specific value. */
+ if (mark_id == (priv->sh->dv_regc0_mask & MLX5_FLOW_MARK_DEFAULT))
+ tag_spec.data = MLX5_FLOW_MARK_DEFAULT;
+ /* Build a new flow. */
+ if (mark_id) {
+ items[0] = (struct rte_flow_item){
+ .type = MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+ .spec = &tag_spec,
+ };
+ items[1] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_MARK,
+ .conf = &ftag,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[3] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ } else {
+ /* Default rule, wildcard match. */
+ attr.priority = MLX5_FLOW_PRIO_RSVD;
+ items[0] = (struct rte_flow_item){
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ };
+ actions[0] = (struct rte_flow_action){
+ .type = MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
+ .conf = &cp_mreg,
+ };
+ actions[1] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_JUMP,
+ .conf = &jump,
+ };
+ actions[2] = (struct rte_flow_action){
+ .type = RTE_FLOW_ACTION_TYPE_END,
+ };
+ }
+ /* Build a new entry. */
+ mcp_res = rte_zmalloc(__func__, sizeof(*mcp_res), 0);
+ if (!mcp_res) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ /*
+ * The copy Flows are not included in any list. These
+ * ones are referenced from other Flows and cannot
+ * be applied, removed or deleted in arbitrary order
+ * by list traversing.
+ */
+ mcp_res->flow = flow_list_create(dev, NULL, &attr, items,
+ actions, false, error);
+ if (!mcp_res->flow)
+ goto error;
+ mcp_res->refcnt++;
+ mcp_res->hlist_ent.key = mark_id;
+ ret = mlx5_hlist_insert(priv->mreg_cp_tbl,
+ &mcp_res->hlist_ent);
+ assert(!ret);
+ if (ret)
+ goto error;
+ return mcp_res;
+error:
+ if (mcp_res->flow)
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ rte_free(mcp_res);
+ return NULL;
+}
+
+/**
+ * Release flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_del_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ if (!mcp_res || !priv->mreg_cp_tbl)
+ return;
+ if (flow->copy_applied) {
+ assert(mcp_res->appcnt);
+ flow->copy_applied = 0;
+ --mcp_res->appcnt;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+ }
+ /*
+ * We do not check availability of metadata registers here,
+ * because copy resources are allocated in this case.
+ */
+ if (--mcp_res->refcnt)
+ return;
+ assert(mcp_res->flow);
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent);
+ rte_free(mcp_res);
+ flow->mreg_copy = NULL;
+}
+
+/**
+ * Start flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_start_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+ int ret;
+
+ if (!mcp_res || flow->copy_applied)
+ return 0;
+ if (!mcp_res->appcnt) {
+ ret = flow_drv_apply(dev, mcp_res->flow, NULL);
+ if (ret)
+ return ret;
+ }
+ ++mcp_res->appcnt;
+ flow->copy_applied = 1;
+ return 0;
+}
+
+/**
+ * Stop flow in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Parent flow for which copying is provided.
+ */
+static void
+flow_mreg_stop_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow *flow)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res = flow->mreg_copy;
+
+ if (!mcp_res || !flow->copy_applied)
+ return;
+ assert(mcp_res->appcnt);
+ --mcp_res->appcnt;
+ flow->copy_applied = 0;
+ if (!mcp_res->appcnt)
+ flow_drv_remove(dev, mcp_res->flow);
+}
+
+/**
+ * Remove the default copy action from RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+flow_mreg_del_default_copy_action(struct rte_eth_dev *dev)
+{
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ /* Check if default flow is registered. */
+ if (!priv->mreg_cp_tbl)
+ return;
+ mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, 0ULL);
+ if (!mcp_res)
+ return;
+ assert(mcp_res->flow);
+ flow_list_destroy(dev, NULL, mcp_res->flow);
+ mlx5_hlist_remove(priv->mreg_cp_tbl, &mcp_res->hlist_ent);
+ rte_free(mcp_res);
+}
+
+/**
+ * Add the default copy action in RX_CP_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 for success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_add_default_copy_action(struct rte_eth_dev *dev,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!priv->config.dv_flow_en ||
+ priv->config.dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ mcp_res = flow_mreg_add_copy_action(dev, 0, error);
+ if (!mcp_res)
+ return -rte_errno;
+ return 0;
+}
+
+/**
+ * Add a flow of copying flow metadata registers in RX_CP_TBL.
+ *
+ * All the flows having a Q/RSS action should be split by
+ * flow_mreg_split_qrss_prep() to pass through RX_CP_TBL. A flow in the RX_CP_TBL
+ * performs the following,
+ * - CQE->flow_tag := reg_c[1] (MARK)
+ * - CQE->flow_table_metadata (reg_b) := reg_c[0] (META)
+ * As CQE's flow_tag is not a register, it can't be simply copied from reg_c[1]
+ * but there should be a flow per each MARK ID set by MARK action.
+ *
+ * For the aforementioned reason, if there's a MARK action in flow's action
+ * list, a corresponding flow should be added to the RX_CP_TBL in order to copy
+ * the MARK ID to CQE's flow_tag like,
+ * - If reg_c[1] is mark_id,
+ * flow_tag := mark_id, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * For SET_META action which stores value in reg_c[0], as the destination is
+ * also a flow metadata register (reg_b), adding a default flow is enough. Zero
+ * MARK ID means the default flow. The default flow looks like,
+ * - For all flow, reg_b := reg_c[0] and jump to RX_ACT_TBL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param flow
+ * Pointer to flow structure.
+ * @param[in] actions
+ * Pointer to the list of actions.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL.
+ *
+ * @return
+ * 0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_mreg_update_copy_table(struct rte_eth_dev *dev,
+ struct rte_flow *flow,
+ const struct rte_flow_action *actions,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_config *config = &priv->config;
+ struct mlx5_flow_mreg_copy_resource *mcp_res;
+ const struct rte_flow_action_mark *mark;
+
+ /* Check whether extensive metadata feature is engaged. */
+ if (!config->dv_flow_en ||
+ config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
+ !mlx5_flow_ext_mreg_supported(dev) ||
+ !priv->sh->dv_regc0_mask)
+ return 0;
+ /* Find MARK action. */
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ mcp_res = flow_mreg_add_copy_action
+ (dev, MLX5_FLOW_MARK_DEFAULT, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ mark = (const struct rte_flow_action_mark *)
+ actions->conf;
+ mcp_res =
+ flow_mreg_add_copy_action(dev, mark->id, error);
+ if (!mcp_res)
+ return -rte_errno;
+ flow->mreg_copy = mcp_res;
+ if (dev->data->dev_started) {
+ mcp_res->appcnt++;
+ flow->copy_applied = 1;
+ }
+ return 0;
+ default:
+ break;
+ }
+ }
+ return 0;
+}
+
#define MLX5_MAX_SPLIT_ACTIONS 24
#define MLX5_MAX_SPLIT_ITEMS 24
@@ -3454,6 +3861,22 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
if (ret < 0)
goto error;
}
+ /*
+ * Update the metadata register copy table. If extensive
+ * metadata feature is enabled and registers are supported
+ * we might create an extra rte_flow for each unique
+ * MARK/FLAG action ID.
+ *
+ * The table is updated for ingress Flows only, because
+ * the egress Flows belong to a different device and the
+ * copy table should be updated in the peer NIC Rx domain.
+ */
+ if (attr->ingress &&
+ (external || attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) {
+ ret = flow_mreg_update_copy_table(dev, flow, actions, error);
+ if (ret)
+ goto error;
+ }
if (dev->data->dev_started) {
ret = flow_drv_apply(dev, flow, error);
if (ret < 0)
@@ -3469,6 +3892,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
hairpin_id);
return NULL;
error:
+ assert(flow);
+ flow_mreg_del_copy_action(dev, flow);
ret = rte_errno; /* Save rte_errno before cleanup. */
if (flow->hairpin_flow_id)
mlx5_flow_id_release(priv->sh->flow_id_pool,
@@ -3577,6 +4002,7 @@ struct rte_flow *
flow_drv_destroy(dev, flow);
if (list)
TAILQ_REMOVE(list, flow, next);
+ flow_mreg_del_copy_action(dev, flow);
rte_free(flow->fdir);
rte_free(flow);
}
@@ -3613,8 +4039,11 @@ struct rte_flow *
{
struct rte_flow *flow;
- TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next)
+ TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
flow_drv_remove(dev, flow);
+ flow_mreg_stop_copy_action(dev, flow);
+ }
+ flow_mreg_del_default_copy_action(dev);
flow_rxq_flags_clear(dev);
}
@@ -3636,7 +4065,15 @@ struct rte_flow *
struct rte_flow_error error;
int ret = 0;
+ /* Make sure default copy action (reg_c[0] -> reg_b) is created. */
+ ret = flow_mreg_add_default_copy_action(dev, &error);
+ if (ret < 0)
+ return -rte_errno;
+ /* Apply Flows created by application. */
TAILQ_FOREACH(flow, list, next) {
+ ret = flow_mreg_start_copy_action(dev, flow);
+ if (ret < 0)
+ goto error;
ret = flow_drv_apply(dev, flow, &error);
if (ret < 0)
goto error;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c71938b..560b2b1 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -38,6 +38,7 @@ enum mlx5_rte_flow_item_type {
enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_END = INT_MIN,
MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ MLX5_RTE_FLOW_ACTION_TYPE_MARK,
MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
};
@@ -417,6 +418,21 @@ struct mlx5_flow_dv_push_vlan_action_resource {
rte_be32_t vlan_tag; /**< VLAN tag value. */
};
+/* Metadata register copy table entry. */
+struct mlx5_flow_mreg_copy_resource {
+ /*
+ * Hash list entry for copy table.
+ * - Key is 32/64-bit MARK action ID.
+ * - MUST be the first entry.
+ */
+ struct mlx5_hlist_entry hlist_ent;
+ LIST_ENTRY(mlx5_flow_mreg_copy_resource) next;
+ /* List entry for device flows. */
+ uint32_t refcnt; /* Reference counter. */
+ uint32_t appcnt; /* Apply/Remove counter. */
+ struct rte_flow *flow; /* Built flow for copy. */
+};
+
/*
* Max number of actions per DV flow.
* See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED
@@ -510,10 +526,13 @@ struct rte_flow {
enum mlx5_flow_drv_type drv_type; /**< Driver type. */
struct mlx5_flow_rss rss; /**< RSS context. */
struct mlx5_flow_counter *counter; /**< Holds flow counter. */
+ struct mlx5_flow_mreg_copy_resource *mreg_copy;
+ /**< pointer to metadata register copy table resource. */
LIST_HEAD(dev_flows, mlx5_flow) dev_flows;
/**< Device flows that are part of the flow. */
struct mlx5_fdir *fdir; /**< Pointer to associated FDIR if any. */
uint32_t hairpin_flow_id; /**< The flow id used for hairpin. */
+ uint32_t copy_applied:1; /**< The MARK copy Flow is applied. */
};
typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 60ebbca..f06227c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4086,8 +4086,11 @@ struct field_modify_info modify_tcp[] = {
NULL,
"groups are not supported");
#else
- uint32_t max_group = attributes->transfer ? MLX5_MAX_TABLES_FDB :
- MLX5_MAX_TABLES;
+ uint32_t max_group = attributes->transfer ?
+ MLX5_MAX_TABLES_FDB :
+ external ?
+ MLX5_MAX_TABLES_EXTERNAL :
+ MLX5_MAX_TABLES;
uint32_t table;
int ret;
@@ -4694,6 +4697,7 @@ struct field_modify_info modify_tcp[] = {
MLX5_FLOW_ACTION_DEC_TCP_ACK;
break;
case MLX5_RTE_FLOW_ACTION_TYPE_TAG:
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG:
break;
default:
@@ -6530,6 +6534,8 @@ struct field_modify_info modify_tcp[] = {
action_flags |= MLX5_FLOW_ACTION_MARK_EXT;
break;
}
+ /* Fall-through */
+ case MLX5_RTE_FLOW_ACTION_TYPE_MARK:
/* Legacy (non-extensive) MARK action. */
tag_resource.tag = mlx5_flow_mark_set
(((const struct rte_flow_action_mark *)
--
1.8.3.1
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
` (18 preceding siblings ...)
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
@ 2019-11-07 22:46 ` Raslan Darawsheh
19 siblings, 0 replies; 64+ messages in thread
From: Raslan Darawsheh @ 2019-11-07 22:46 UTC (permalink / raw)
To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Thomas Monjalon, Ori Kam, Yongseok Koh
Hi,
> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Sent: Thursday, November 7, 2019 7:10 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; Ori
> Kam <orika@mellanox.com>; Yongseok Koh <yskoh@mellanox.com>
> Subject: [PATCH v3 00/19] net/mlx5: implement extensive metadata feature
>
> The modern networks operate on the base of the packet switching
> approach, and in-network environment data are transmitted as the packets.
> Within the host besides the data, actually transmitted on the wire as packets,
> there might some out-of-band data helping to process packets. These data
> are named as metadata, exist on a per-packet basis and are attached to each
> packet as some extra dedicated storage (in meaning it besides the packet
> data itself).
>
> In the DPDK network data are represented as mbuf structure chains and go
> along the application/DPDK datapath. From the other side, DPDK provides
> Flow API to control the flow engine. Being precise, there are two kinds of
> metadata in the DPDK, the one is purely software metadata (as fields of
> mbuf - flags, packet types, data length, etc.), and the other is metadata
> within flow engine.
> In this scope, we cover the second type (flow engine metadata) only.
>
> The flow engine metadata is some extra data, supported on the per-packet
> basis and usually handled by hardware inside flow engine.
>
> Initially, there were proposed two metadata related actions:
>
> - RTE_FLOW_ACTION_TYPE_FLAG
> - RTE_FLOW_ACTION_TYPE_MARK
>
> These actions set the special flag in the packet metadata, MARK action stores
> some specified value in the metadata storage, and, on the packet receiving
> PMD puts the flag and value to the mbuf and applications can see the packet
> was threated inside flow engine according to the appropriate RTE flow(s).
> MARK and FLAG are like some kind of gateway to transfer some per-packet
> information from the flow engine to the application via receiving datapath.
> Also, there is the item of type RTE_FLOW_ITEM_TYPE_MARK provided. It
> allows us to extend the flow match pattern with the capability to match the
> metadata values set by MARK/FLAG actions on other flows.
>
> From the datapath point of view, the MARK and FLAG are related to the
> receiving side only. It would useful to have the same gateway on the
> transmitting side and there was the feature of type
> RTE_FLOW_ITEM_TYPE_META was proposed. The application can fill the field
> in mbuf and this value will be transferred to some field in the packet
> metadata inside the flow engine.
>
> It did not matter whether these metadata fields are shared because of
> MARK and META items belonged to different domains (receiving and
> transmitting) and could be vendor-specific.
>
> So far, so good, DPDK proposes some entities to control metadata inside the
> flow engine and gateways to exchange these values on a per-packet basis via
> datapath.
>
> As we can see, the MARK and META means are not symmetric, there is
> absent action which would allow us to set META value on the transmitting
> path. So, the action of type:
>
> - RTE_FLOW_ACTION_TYPE_SET_META is proposed.
>
> The next, applications raise the new requirements for packet metadata. The
> flow engines are getting more complex, internal switches are introduced,
> multiple ports might be supported within the same flow engine namespace.
> From the DPDK points of view, it means the packets might be sent on one
> eth_dev port and received on the other one, and the packet path inside the
> flow engine entirely belongs to the same hardware device. The simplest
> example is SR-IOV with PF, VFs and the representors. And there is a brilliant
> opportunity to provide some out-of-band channel to transfer some extra
> data from one port to another one, besides the packet data itself. And
> applications would like to use this opportunity.
>
> Improving the metadata definitions it is proposed to:
> - suppose MARK and META metadata fields not shared, dedicated
> - extend applying area for MARK and META items/actions for all
> flow engine domains - transmitting and receiving
> - allow MARK and META metadata to be preserved while crossing
> the flow domains (from transmit origin through flow database
> inside (E-)switch to receiving side domain), in simple words,
> to allow metadata to convey the packet thought entire flow
> engine space.
>
> Another new proposed feature is transient per-packet storage inside the
> flow engine. It might have a lot of use cases.
> For example, if there is VXLAN tunneled traffic and some flow performs
> VXLAN decapsulation and wishes to save information regarding the dropped
> header it could use this temporary transient storage. The tools to maintain
> this storage are traditional (for DPDK rte_flow API):
>
> - RTE_FLOW_ACTION_TYPE_SET_TAG - to set value
> - RTE_FLOW_ACTION_TYPE_SET_ITEM - to match on
>
> There are primary properties of the proposed storage:
> - the storage is presented as an array of 32-bit opaque values
> - the size of array (or even bitmap of available indices) is
> vendor specific and is subject to run-time trial
> - it is transient, it means it exists only inside flow engine,
> no gateways for interacting with datapath, applications have
> way neither to specify these data on transmitting nor to get
> these data on receiving
>
> This patchset implements the abovementioned extensive metadata feature
> in the mlx5 PMD.
>
> The patchset must be applied after hashed list patch:
>
> [1]
> http://patches.dpdk.org/patch/62539/
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>
>
> ---
> v3: - moved missed part from isolated debug commit
> - rebased
>
> v2: -
> http://patches.dpdk.org/cover/62579/
> - fix: metadata endianess
> - fix: infinite loop in header modify update routine
> - fix: reg_c_3 is reserved for split shared tag
> - fix: vport mask and value endianess
> - hash list implementation removed
> - rebased
>
> v1:
> http://patches.dpdk.org/cover/62419/
>
> Viacheslav Ovsiienko (19):
> net/mlx5: convert internal tag endianness
> net/mlx5: update modify header action translator
> net/mlx5: add metadata register copy
> net/mlx5: refactor flow structure
> net/mlx5: update flow functions
> net/mlx5: update meta register matcher set
> net/mlx5: rename structure and function
> net/mlx5: check metadata registers availability
> net/mlx5: add devarg for extensive metadata support
> net/mlx5: adjust shared register according to mask
> net/mlx5: check the maximal modify actions number
> net/mlx5: update metadata register id query
> net/mlx5: add flow tag support
> net/mlx5: extend flow mark support
> net/mlx5: extend flow meta data support
> net/mlx5: add meta data support to Rx datapath
> net/mlx5: introduce flow splitters chain
> net/mlx5: split Rx flows to provide metadata copy
> net/mlx5: add metadata register copy table
>
> doc/guides/nics/mlx5.rst | 49 +
> drivers/net/mlx5/mlx5.c | 150 ++-
> drivers/net/mlx5/mlx5.h | 19 +-
> drivers/net/mlx5/mlx5_defs.h | 8 +
> drivers/net/mlx5/mlx5_ethdev.c | 8 +-
> drivers/net/mlx5/mlx5_flow.c | 1201 ++++++++++++++++++++++-
> drivers/net/mlx5/mlx5_flow.h | 108 ++-
> drivers/net/mlx5/mlx5_flow_dv.c | 1566
> ++++++++++++++++++++++++------
> drivers/net/mlx5/mlx5_flow_verbs.c | 55 +-
> drivers/net/mlx5/mlx5_prm.h | 45 +-
> drivers/net/mlx5/mlx5_rxtx.c | 5 +
> drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 25 +-
> drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 23 +
> drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 27 +-
> 14 files changed, 2866 insertions(+), 423 deletions(-)
>
> --
> 1.8.3.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
@ 2019-11-25 14:24 ` David Marchand
0 siblings, 0 replies; 64+ messages in thread
From: David Marchand @ 2019-11-25 14:24 UTC (permalink / raw)
To: Viacheslav Ovsiienko
Cc: dev, Matan Azrad, Raslan, Thomas Monjalon, Ori Kam, Yongseok Koh
On Thu, Nov 7, 2019 at 6:13 PM Viacheslav Ovsiienko
<viacheslavo@mellanox.com> wrote:
> @@ -1011,7 +1010,29 @@
> pkts[pos + 3]->timestamp =
> rte_be_to_cpu_64(cq[pos + p3].timestamp);
> }
> -
> + if (rte_flow_dynf_metadata_avail()) {
> + uint64_t flag = rte_flow_dynf_metadata_mask;
> + int offs = rte_flow_dynf_metadata_offs;
IIUC, this patch does not use the helpers from rte_flow.h:
RTE_FLOW_DYNF_METADATA, PKT_RX_DYNF_METADATA.
--
David Marchand
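For reference, a hedged sketch of what the scalar Rx completion handling
could look like with those helpers (RTE_FLOW_DYNF_METADATA() and
PKT_RX_DYNF_METADATA are the generic rte_flow.h accessors; the helper
function name below is made up for illustration and is not the driver's
actual code):

#include <rte_flow.h>

static inline void
rxq_cq_copy_metadata(volatile struct mlx5_cqe *cqe, struct rte_mbuf *pkt)
{
	/* Copy CQE metadata into the registered dynamic mbuf field. */
	if (rte_flow_dynf_metadata_avail() && cqe->flow_table_metadata) {
		*RTE_FLOW_DYNF_METADATA(pkt) = cqe->flow_table_metadata;
		pkt->ol_flags |= PKT_RX_DYNF_METADATA;
	}
}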
^ permalink raw reply [flat|nested] 64+ messages in thread
end of thread, other threads:[~2019-11-25 14:24 UTC | newest]
Thread overview: 64+ messages
2019-11-05 8:01 [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 01/20] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 02/20] net/mlx5: update modify header action translator Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 03/20] net/mlx5: add metadata register copy Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 04/20] net/mlx5: refactor flow structure Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 05/20] net/mlx5: update flow functions Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 06/20] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 07/20] net/mlx5: rename structure and function Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 08/20] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 09/20] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 10/20] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 11/20] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 12/20] net/mlx5: update metadata register id query Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 13/20] net/mlx5: add flow tag support Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 14/20] net/mlx5: extend flow mark support Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 15/20] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 16/20] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 17/20] net/mlx5: add simple hash table Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 18/20] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 19/20] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
2019-11-05 8:01 ` [dpdk-dev] [PATCH 20/20] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
2019-11-05 9:35 ` [dpdk-dev] [PATCH 00/20] net/mlx5: implement extensive metadata feature Matan Azrad
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 00/19] " Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
2019-11-06 17:37 ` [dpdk-dev] [PATCH v2 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 01/19] net/mlx5: convert internal tag endianness Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 02/19] net/mlx5: update modify header action translator Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 03/19] net/mlx5: add metadata register copy Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 04/19] net/mlx5: refactor flow structure Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 05/19] net/mlx5: update flow functions Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 06/19] net/mlx5: update meta register matcher set Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 07/19] net/mlx5: rename structure and function Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 08/19] net/mlx5: check metadata registers availability Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 09/19] net/mlx5: add devarg for extensive metadata support Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 10/19] net/mlx5: adjust shared register according to mask Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 11/19] net/mlx5: check the maximal modify actions number Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 12/19] net/mlx5: update metadata register id query Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 13/19] net/mlx5: add flow tag support Viacheslav Ovsiienko
2019-11-07 17:09 ` [dpdk-dev] [PATCH v3 14/19] net/mlx5: extend flow mark support Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 15/19] net/mlx5: extend flow meta data support Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 16/19] net/mlx5: add meta data support to Rx datapath Viacheslav Ovsiienko
2019-11-25 14:24 ` David Marchand
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 17/19] net/mlx5: introduce flow splitters chain Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 18/19] net/mlx5: split Rx flows to provide metadata copy Viacheslav Ovsiienko
2019-11-07 17:10 ` [dpdk-dev] [PATCH v3 19/19] net/mlx5: add metadata register copy table Viacheslav Ovsiienko
2019-11-07 22:46 ` [dpdk-dev] [PATCH v3 00/19] net/mlx5: implement extensive metadata feature Raslan Darawsheh