automatic DPDK test reports
* |WARNING| pw118676 [PATCH] [v2] net/mlx5: add port representor item support
@ 2022-10-20  1:38 dpdklab
From: dpdklab @ 2022-10-20  1:38 UTC (permalink / raw)
  To: test-report; +Cc: dpdk-test-reports

Test-Label: iol-testing
Test-Status: WARNING
http://dpdk.org/patch/118676

_apply patch failure_

Submitter: Sean Zhang (Networking SW) <xiazhang@nvidia.com>
Date: Thursday, October 20, 2022 01:20:27
Applied on: CommitID:a74b1b25136a592c275afbfa6b70771469750aee
Apply patch set 118676 failed:

Checking patch doc/guides/nics/features/mlx5.ini...
Checking patch drivers/net/mlx5/mlx5_flow.c...
Hunk #1 succeeded at 108 (offset -18 lines).
Hunk #2 succeeded at 565 (offset -18 lines).
Hunk #3 succeeded at 5443 (offset -69 lines).
Hunk #4 succeeded at 6103 (offset -69 lines).
Hunk #5 succeeded at 6916 (offset -70 lines).
Hunk #6 succeeded at 6930 (offset -70 lines).
Hunk #7 succeeded at 7019 (offset -70 lines).
Checking patch drivers/net/mlx5/mlx5_flow_dv.c...
Hunk #1 succeeded at 7059 (offset -130 lines).
error: while searching for:
			     mlx5_flow_get_thread_workspace())->rss_desc,
	};
	struct mlx5_dv_matcher_workspace wks_m = wks;
	int ret = 0;
	int tunnel;


error: patch failed: drivers/net/mlx5/mlx5_flow_dv.c:13560
error: while searching for:
						  RTE_FLOW_ERROR_TYPE_ITEM,
						  NULL, "item not supported");
		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
		switch (items->type) {
		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
			flow_dv_translate_item_aso_ct(dev, match_mask,
						      match_value, items);

error: patch failed: drivers/net/mlx5/mlx5_flow_dv.c:13569
error: while searching for:
			wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
						 MLX5_FLOW_ITEM_OUTER_FLEX;
			break;
		default:
			ret = flow_dv_translate_items(dev, items, &wks_m,
				match_mask, MLX5_SET_MATCHER_SW_M, error);

error: patch failed: drivers/net/mlx5/mlx5_flow_dv.c:13581
Applied patch doc/guides/nics/features/mlx5.ini cleanly.
Applied patch drivers/net/mlx5/mlx5_flow.c cleanly.
Applying patch drivers/net/mlx5/mlx5_flow_dv.c with 3 rejects...
Hunk #1 applied cleanly.
Rejected hunk #2.
Rejected hunk #3.
Rejected hunk #4.
diff a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c	(rejected hunks)
@@ -13560,6 +13561,7 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			     mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
+	int item_type;
 	int ret = 0;
 	int tunnel;
 
@@ -13569,7 +13571,8 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
-		switch (items->type) {
+		item_type = items->type;
+		switch (item_type) {
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13581,6 +13584,12 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
 						 MLX5_FLOW_ITEM_OUTER_FLEX;
 			break;
+		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+			flow_dv_translate_item_sq(match_value, items,
+						  MLX5_SET_MATCHER_SW_V);
+			flow_dv_translate_item_sq(match_mask, items,
+						  MLX5_SET_MATCHER_SW_M);
+			break;
 		default:
 			ret = flow_dv_translate_items(dev, items, &wks_m,
 				match_mask, MLX5_SET_MATCHER_SW_M, error);

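For context on the "3 rejects" above: when hunk context no longer matches the target tree, `git apply --reject` applies what it can and writes each failed hunk to a `*.rej` file next to the source for manual resolution. A minimal, self-contained sketch of that behavior (using a throwaway repo and a hypothetical `file.c`, not the mlx5 sources):

```shell
#!/bin/sh
# Sketch: demonstrate how a mismatched hunk ends up in a *.rej file.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
printf 'line1\nline2\nline3\n' > file.c
git add file.c
git -c user.email=demo@example.com -c user.name=demo commit -qm base
# A patch whose context line ("lineX") no longer matches the tree,
# simulating the drift that caused the rejects in the report above:
cat > change.patch <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1,3 +1,4 @@
 line1
+added
 lineX
 line3
EOF
# --reject keeps going and saves the failed hunk instead of aborting:
git apply --reject change.patch || true
ls file.c.rej   # rejected hunk, to be resolved by hand
```

Resolving such rejects typically means rebasing the patch onto the baseline commit named in the report and folding the `.rej` contents in manually.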
https://lab.dpdk.org/results/dashboard/patchsets/24090/

UNH-IOL DPDK Community Lab
