From: Gregory Etelson
To: dev@dpdk.org
Cc: Suanming Mou, Matan Azrad, Viacheslav Ovsiienko, Ori Kam
Subject: [PATCH v7 09/10] net/mlx5: reformat HWS code for indirect list actions
Date: Thu, 26 Oct 2023 10:12:28 +0300
Message-ID: <20231026071229.300162-10-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231026071229.300162-1-getelson@nvidia.com>
References: <20231017080928.30454-1-getelson@nvidia.com>
 <20231026071229.300162-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions
X-BeenThere: dev@dpdk.org

Reformat HWS code for indirect list actions.

Signed-off-by: Gregory Etelson
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow.h    |   4 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 235 ++++++++++++++++++--------------
 2 files changed, 131 insertions(+), 108 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 580db80fd4..653f83cf55 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1331,11 +1331,11 @@ struct rte_flow_actions_template {
 	uint64_t action_flags; /* Bit-map of all valid action in template. */
 	uint16_t dr_actions_num; /* Amount of DR rules actions. */
 	uint16_t actions_num; /* Amount of flow actions */
-	uint16_t *actions_off; /* DR action offset for given rte action offset. */
+	uint16_t *dr_off; /* DR action offset for given rte action offset. */
+	uint16_t *src_off; /* RTE action displacement from app. template */
 	uint16_t reformat_off; /* Offset of DR reformat action. */
 	uint16_t mhdr_off; /* Offset of DR modify header action. */
 	uint32_t refcnt; /* Reference counter. */
-	uint16_t rx_cpy_pos; /* Action position of Rx metadata to be copied. */
 	uint8_t flex_item; /* flex item index. */
 };

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b11348c99c..f9f735ba75 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1015,11 +1015,11 @@ flow_hw_modify_field_init(struct mlx5_hw_modify_header_action *mhdr,
 static __rte_always_inline int
 flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 			     const struct rte_flow_attr *attr,
-			     const struct rte_flow_action *action_start, /* Start of AT actions. */
 			     const struct rte_flow_action *action, /* Current action from AT. */
 			     const struct rte_flow_action *action_mask, /* Current mask from AT. */
 			     struct mlx5_hw_actions *acts,
 			     struct mlx5_hw_modify_header_action *mhdr,
+			     uint16_t src_pos,
 			     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -1122,7 +1122,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 	if (shared)
 		return 0;
 	ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-						   action - action_start, mhdr->pos,
+						   src_pos, mhdr->pos,
 						   cmds_start, cmds_end, shared,
 						   field, dcopy, mask);
 	if (ret)
@@ -1181,11 +1181,10 @@ flow_hw_validate_compiled_modify_field(struct rte_eth_dev *dev,
 static int
 flow_hw_represented_port_compile(struct rte_eth_dev *dev,
 				 const struct rte_flow_attr *attr,
-				 const struct rte_flow_action *action_start,
 				 const struct rte_flow_action *action,
 				 const struct rte_flow_action *action_mask,
 				 struct mlx5_hw_actions *acts,
-				 uint16_t action_dst,
+				 uint16_t action_src, uint16_t action_dst,
 				 struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -1241,7 +1240,7 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev,
 	} else {
 		ret = __flow_hw_act_data_general_append
 			(priv, acts, action->type,
-			 action - action_start, action_dst);
+			 action_src, action_dst);
 		if (ret)
 			return rte_flow_error_set
				(error, ENOMEM,
@@ -1493,7 +1492,6 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
-	struct rte_flow_action *action_start = actions;
 	struct rte_flow_action *masks = at->masks;
 	enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
@@ -1506,7 +1504,6 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	uint32_t type;
 	bool reformat_used = false;
 	unsigned int of_vlan_offset;
-	uint16_t action_pos;
 	uint16_t jump_pos;
 	uint32_t ct_idx;
 	int ret, err;
@@ -1521,71 +1518,69 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	else
 		type = MLX5DR_TABLE_TYPE_NIC_RX;
 	for (; !actions_end; actions++, masks++) {
+		uint64_t pos = actions - at->actions;
+		uint16_t src_pos = pos - at->src_off[pos];
+		uint16_t dr_pos = at->dr_off[pos];
+
 		switch ((int)actions->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
-			action_pos = at->actions_off[actions - at->actions];
 			if (!attr->group) {
 				DRV_LOG(ERR, "Indirect action is not supported in root table.");
 				goto err;
 			}
 			ret = table_template_translate_indirect_list
-				(dev, actions, masks, acts,
-				 actions - action_start,
-				 action_pos);
+				(dev, actions, masks, acts, src_pos, dr_pos);
 			if (ret)
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
-			action_pos = at->actions_off[actions - at->actions];
 			if (!attr->group) {
 				DRV_LOG(ERR, "Indirect action is not supported in root table.");
 				goto err;
 			}
 			if (actions->conf && masks->conf) {
 				if (flow_hw_shared_action_translate
-				    (dev, actions, acts, actions - action_start, action_pos))
+				    (dev, actions, acts, src_pos, dr_pos))
 					goto err;
 			} else if (__flow_hw_act_data_general_append
-				   (priv, acts, actions->type,
-				    actions - action_start, action_pos)){
+				   (priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT,
+				    src_pos, dr_pos)){
 				goto err;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
-			action_pos = at->actions_off[actions - at->actions];
-			acts->rule_acts[action_pos].action =
+			acts->rule_acts[dr_pos].action =
 				priv->hw_drop[!!attr->group];
 			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
-			action_pos = at->actions_off[actions - at->actions];
 			acts->mark = true;
 			if (masks->conf &&
 			    ((const struct rte_flow_action_mark *)
 			     masks->conf)->id)
-				acts->rule_acts[action_pos].tag.value =
+				acts->rule_acts[dr_pos].tag.value =
 					mlx5_flow_mark_set
 					(((const struct rte_flow_action_mark *)
 					(actions->conf))->id);
 			else if (__flow_hw_act_data_general_append(priv, acts,
-								   actions->type, actions - action_start, action_pos))
+								   actions->type,
+								   src_pos, dr_pos))
 				goto err;
-			acts->rule_acts[action_pos].action =
+			acts->rule_acts[dr_pos].action =
 				priv->hw_tag[!!attr->group];
 			__atomic_fetch_add(&priv->hws_mark_refcnt, 1, __ATOMIC_RELAXED);
 			flow_hw_rxq_flag_set(dev, true);
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
-			action_pos = at->actions_off[actions - at->actions];
-			acts->rule_acts[action_pos].action =
+			acts->rule_acts[dr_pos].action =
 				priv->hw_push_vlan[type];
 			if (is_template_masked_push_vlan(masks->conf))
-				acts->rule_acts[action_pos].push_vlan.vlan_hdr =
+				acts->rule_acts[dr_pos].push_vlan.vlan_hdr =
 					vlan_hdr_to_be32(actions);
 			else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos))
+					 src_pos, dr_pos))
 				goto err;
 			of_vlan_offset = is_of_vlan_pcp_present(actions) ?
 					MLX5_HW_VLAN_PUSH_PCP_IDX :
@@ -1594,12 +1589,10 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			masks += of_vlan_offset;
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
-			action_pos = at->actions_off[actions - at->actions];
-			acts->rule_acts[action_pos].action =
+			acts->rule_acts[dr_pos].action =
 				priv->hw_pop_vlan[type];
 			break;
 		case RTE_FLOW_ACTION_TYPE_JUMP:
-			action_pos = at->actions_off[actions - at->actions];
 			if (masks->conf &&
 			    ((const struct rte_flow_action_jump *)
 			     masks->conf)->group) {
@@ -1610,17 +1603,16 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
					(dev, cfg, jump_group, error);
 				if (!acts->jump)
 					goto err;
-				acts->rule_acts[action_pos].action = (!!attr->group) ?
-						       acts->jump->hws_action :
-						       acts->jump->root_action;
+				acts->rule_acts[dr_pos].action = (!!attr->group) ?
+								 acts->jump->hws_action :
+								 acts->jump->root_action;
 			} else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos)){
+					 src_pos, dr_pos)){
 				goto err;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			action_pos = at->actions_off[actions - at->actions];
 			if (masks->conf &&
 			    ((const struct rte_flow_action_queue *)
 			     masks->conf)->index) {
@@ -1630,16 +1622,15 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
					 actions);
 				if (!acts->tir)
 					goto err;
-				acts->rule_acts[action_pos].action =
+				acts->rule_acts[dr_pos].action =
 					acts->tir->action;
 			} else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos)) {
+					 src_pos, dr_pos)) {
 				goto err;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			action_pos = at->actions_off[actions - at->actions];
 			if (actions->conf && masks->conf) {
 				acts->tir = flow_hw_tir_action_register
					(dev,
@@ -1647,11 +1638,11 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
					 actions);
 				if (!acts->tir)
 					goto err;
-				acts->rule_acts[action_pos].action =
+				acts->rule_acts[dr_pos].action =
 					acts->tir->action;
 			} else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos)) {
+					 src_pos, dr_pos)) {
 				goto err;
 			}
 			break;
@@ -1663,7 +1654,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 				enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
					     masks->conf)->definition;
 			reformat_used = true;
-			reformat_src = actions - action_start;
+			reformat_src = src_pos;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
@@ -1674,7 +1665,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 				enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
					     masks->conf)->definition;
 			reformat_used = true;
-			reformat_src = actions - action_start;
+			reformat_src = src_pos;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
 			break;
 		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
@@ -1704,7 +1695,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 				refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
 			}
-			reformat_src = actions - action_start;
+			reformat_src = src_pos;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
 			reformat_used = true;
@@ -1720,34 +1711,22 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
						  NULL,
						  "Send to kernel action on root table is not supported in HW steering mode");
 			}
-			action_pos = at->actions_off[actions - at->actions];
 			table_type = attr->ingress ? MLX5DR_TABLE_TYPE_NIC_RX :
				     ((attr->egress) ? MLX5DR_TABLE_TYPE_NIC_TX :
-				     MLX5DR_TABLE_TYPE_FDB);
-			acts->rule_acts[action_pos].action = priv->hw_send_to_kernel[table_type];
+				      MLX5DR_TABLE_TYPE_FDB);
+			acts->rule_acts[dr_pos].action = priv->hw_send_to_kernel[table_type];
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
-			err = flow_hw_modify_field_compile(dev, attr, action_start,
-							   actions, masks, acts, &mhdr,
-							   error);
+			err = flow_hw_modify_field_compile(dev, attr, actions,
+							   masks, acts, &mhdr,
+							   src_pos, error);
 			if (err)
 				goto err;
-			/*
-			 * Adjust the action source position for the following.
-			 * ... / MODIFY_FIELD: rx_cpy_pos / (QUEUE|RSS) / ...
-			 * The next action will be Q/RSS, there will not be
-			 * another adjustment and the real source position of
-			 * the following actions will be decreased by 1.
-			 * No change of the total actions in the new template.
-			 */
-			if ((actions - action_start) == at->rx_cpy_pos)
-				action_start += 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
-			action_pos = at->actions_off[actions - at->actions];
 			if (flow_hw_represented_port_compile
-					(dev, attr, action_start, actions,
-					 masks, acts, action_pos, error))
+					(dev, attr, actions,
+					 masks, acts, src_pos, dr_pos, error))
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
@@ -1756,19 +1735,18 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			 * Calculated DR offset is stored only for ASO_METER and FT
 			 * is assumed to be the next action.
 			 */
-			action_pos = at->actions_off[actions - at->actions];
-			jump_pos = action_pos + 1;
+			jump_pos = dr_pos + 1;
 			if (actions->conf && masks->conf &&
 			    ((const struct rte_flow_action_meter *)
 			     masks->conf)->mtr_id) {
 				err = flow_hw_meter_compile(dev, cfg,
-						action_pos, jump_pos, actions, acts, error);
+							    dr_pos, jump_pos, actions, acts, error);
 				if (err)
 					goto err;
 			} else if (__flow_hw_act_data_general_append(priv, acts,
-							actions->type,
-							actions - action_start,
-							action_pos))
+								     actions->type,
+								     src_pos,
+								     dr_pos))
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_AGE:
@@ -1781,11 +1759,10 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
						   NULL,
						   "Age action on root table is not supported in HW steering mode");
 			}
-			action_pos = at->actions_off[actions - at->actions];
 			if (__flow_hw_act_data_general_append(priv, acts,
-							      actions->type,
-							      actions - action_start,
-							      action_pos))
+							      actions->type,
+							      src_pos,
+							      dr_pos))
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
@@ -1806,49 +1783,46 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			 * counter.
 			 */
				break;
-			action_pos = at->actions_off[actions - at->actions];
 			if (masks->conf &&
 			    ((const struct rte_flow_action_count *)
 			     masks->conf)->id) {
-				err = flow_hw_cnt_compile(dev, action_pos, acts);
+				err = flow_hw_cnt_compile(dev, dr_pos, acts);
 				if (err)
 					goto err;
 			} else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos)) {
+					 src_pos, dr_pos)) {
 				goto err;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			action_pos = at->actions_off[actions - at->actions];
 			if (masks->conf) {
 				ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
					 ((uint32_t)(uintptr_t)actions->conf);
 				if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
-						       &acts->rule_acts[action_pos]))
+						       &acts->rule_acts[dr_pos]))
 					goto err;
 			} else if (__flow_hw_act_data_general_append
 					(priv, acts, actions->type,
-					 actions - action_start, action_pos)) {
+					 src_pos, dr_pos)) {
 				goto err;
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_METER_MARK:
-			action_pos = at->actions_off[actions - at->actions];
 			if (actions->conf && masks->conf &&
 			    ((const struct rte_flow_action_meter_mark *)
 			     masks->conf)->profile) {
 				err = flow_hw_meter_mark_compile(dev,
-					action_pos, actions,
-					acts->rule_acts,
-					&acts->mtr_id,
-					MLX5_HW_INV_QUEUE);
+								 dr_pos, actions,
+								 acts->rule_acts,
+								 &acts->mtr_id,
+								 MLX5_HW_INV_QUEUE);
 				if (err)
 					goto err;
 			} else if (__flow_hw_act_data_general_append(priv, acts,
-							actions->type,
-							actions - action_start,
-							action_pos))
+								     actions->type,
+								     src_pos,
+								     dr_pos))
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_END:
@@ -1931,7 +1905,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		if (shared_rfmt)
 			acts->rule_acts[at->reformat_off].reformat.offset = 0;
 		else if (__flow_hw_act_data_encap_append(priv, acts,
-				(action_start + reformat_src)->type,
+				(at->actions + reformat_src)->type,
 				reformat_src, at->reformat_off, data_size))
 			goto err;
 		acts->encap_decap->shared = shared_rfmt;
@@ -4283,6 +4257,31 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }

+/**
+ * Process `... / raw_decap / raw_encap / ...` actions sequence.
+ * The PMD handles the sequence as a single encap or decap reformat action,
+ * depending on the raw_encap configuration.
+ *
+ * The function assumes that the raw_decap / raw_encap location
+ * in actions template list complies with relative HWS actions order:
+ * for the required reformat configuration:
+ * ENCAP configuration must appear before [JUMP|DROP|PORT]
+ * DECAP configuration must appear at the template head.
+ */
+static uint64_t
+mlx5_decap_encap_reformat_type(const struct rte_flow_action *actions,
+			       uint32_t encap_ind, uint64_t flags)
+{
+	const struct rte_flow_action_raw_encap *encap = actions[encap_ind].conf;
+
+	if ((flags & MLX5_FLOW_ACTION_DECAP) == 0)
+		return MLX5_FLOW_ACTION_ENCAP;
+	if (actions[encap_ind - 1].type != RTE_FLOW_ACTION_TYPE_RAW_DECAP)
+		return MLX5_FLOW_ACTION_ENCAP;
+	return encap->size >= MLX5_ENCAPSULATION_DECISION_SIZE ?
+	       MLX5_FLOW_ACTION_ENCAP : MLX5_FLOW_ACTION_DECAP;
+}
+
 static inline uint16_t
 flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
				     struct rte_flow_action masks[],
@@ -4320,13 +4319,13 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
 	 */
 	for (i = act_num - 2; (int)i >= 0; i--) {
 		enum rte_flow_action_type type = actions[i].type;
+		uint64_t reformat_type;

 		if (type == RTE_FLOW_ACTION_TYPE_INDIRECT)
 			type = masks[i].type;
 		switch (type) {
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
-		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 		case RTE_FLOW_ACTION_TYPE_DROP:
 		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 		case RTE_FLOW_ACTION_TYPE_JUMP:
@@ -4337,10 +4336,20 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
 		case RTE_FLOW_ACTION_TYPE_VOID:
 		case RTE_FLOW_ACTION_TYPE_END:
 			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			reformat_type =
+				mlx5_decap_encap_reformat_type(actions, i,
+							       flags);
+			if (reformat_type == MLX5_FLOW_ACTION_DECAP) {
+				i++;
+				goto insert;
+			}
+			if (actions[i - 1].type == RTE_FLOW_ACTION_TYPE_RAW_DECAP)
+				i--;
+			break;
 		default:
 			i++; /* new MF inserted AFTER actions[i] */
 			goto insert;
-			break;
 		}
 	}
 	i = 0;
@@ -4649,7 +4658,7 @@ action_template_set_type(struct rte_flow_actions_template *at,
			 unsigned int action_src, uint16_t *curr_off,
			 enum mlx5dr_action_type type)
 {
-	at->actions_off[action_src] = *curr_off;
+	at->dr_off[action_src] = *curr_off;
 	action_types[*curr_off] = type;
 	*curr_off = *curr_off + 1;
 }
@@ -4680,11 +4689,13 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
 		 * Both AGE and COUNT action need counter, the first one fills
 		 * the action_types array, and the second only saves the offset.
 		 */
-		if (*cnt_off == UINT16_MAX)
+		if (*cnt_off == UINT16_MAX) {
+			*cnt_off = *curr_off;
 			action_template_set_type(at, action_types,
						 action_src, curr_off,
						 MLX5DR_ACTION_TYP_CTR);
-		at->actions_off[action_src] = *cnt_off;
+		}
+		at->dr_off[action_src] = *cnt_off;
 		break;
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		action_template_set_type(at, action_types, action_src, curr_off,
@@ -4804,7 +4815,7 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
-			at->actions_off[i] = curr_off;
+			at->dr_off[i] = curr_off;
 			action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER;
 			if (curr_off >= MLX5_HW_MAX_ACTS)
 				goto err_actions_num;
@@ -4812,14 +4823,14 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 			type = mlx5_hw_dr_action_types[at->actions[i].type];
-			at->actions_off[i] = curr_off;
+			at->dr_off[i] = curr_off;
 			action_types[curr_off++] = type;
 			i += is_of_vlan_pcp_present(at->actions + i) ?
				MLX5_HW_VLAN_PUSH_PCP_IDX :
				MLX5_HW_VLAN_PUSH_VID_IDX;
 			break;
 		case RTE_FLOW_ACTION_TYPE_METER_MARK:
-			at->actions_off[i] = curr_off;
+			at->dr_off[i] = curr_off;
 			action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER;
 			if (curr_off >= MLX5_HW_MAX_ACTS)
 				goto err_actions_num;
@@ -4835,11 +4846,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
				cnt_off = curr_off++;
				action_types[cnt_off] = MLX5DR_ACTION_TYP_CTR;
			}
-			at->actions_off[i] = cnt_off;
+			at->dr_off[i] = cnt_off;
 			break;
 		default:
 			type = mlx5_hw_dr_action_types[at->actions[i].type];
-			at->actions_off[i] = curr_off;
+			at->dr_off[i] = curr_off;
 			action_types[curr_off++] = type;
 			break;
 		}
@@ -5112,6 +5123,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
 	struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
 	uint32_t expand_mf_num = 0;
+	uint16_t src_off[MLX5_HW_MAX_ACTS] = {0, };

 	if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
					  &action_flags, error))
@@ -5190,6 +5202,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
				       act_num, expand_mf_num);
 		act_num += expand_mf_num;
+		for (i = pos + expand_mf_num; i < act_num; i++)
+			src_off[i] += expand_mf_num;
 		action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
 	}
 	act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
@@ -5200,7 +5214,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	if (mask_len <= 0)
 		return NULL;
 	len += RTE_ALIGN(mask_len, 16);
-	len += RTE_ALIGN(act_num * sizeof(*at->actions_off), 16);
+	len += RTE_ALIGN(act_num * sizeof(*at->dr_off), 16);
+	len += RTE_ALIGN(act_num * sizeof(*at->src_off), 16);
 	at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at),
			 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!at) {
@@ -5224,13 +5239,15 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	if (mask_len <= 0)
 		goto error;
 	/* DR actions offsets in the third part. */
-	at->actions_off = (uint16_t *)((uint8_t *)at->masks + mask_len);
+	at->dr_off = (uint16_t *)((uint8_t *)at->masks + mask_len);
+	at->src_off = RTE_PTR_ADD(at->dr_off,
+				  RTE_ALIGN(act_num * sizeof(*at->dr_off), 16));
+	memcpy(at->src_off, src_off, act_num * sizeof(at->src_off[0]));
 	at->actions_num = act_num;
 	for (i = 0; i < at->actions_num; ++i)
-		at->actions_off[i] = UINT16_MAX;
+		at->dr_off[i] = UINT16_MAX;
 	at->reformat_off = UINT16_MAX;
 	at->mhdr_off = UINT16_MAX;
-	at->rx_cpy_pos = pos;
 	for (i = 0; actions->type != RTE_FLOW_ACTION_TYPE_END;
	     actions++, masks++, i++) {
 		const struct rte_flow_action_modify_field *info;
@@ -9547,14 +9564,15 @@ mlx5_mirror_terminal_action(const struct rte_flow_action *action)

 static bool
 mlx5_mirror_validate_sample_action(struct rte_eth_dev *dev,
-				   const struct rte_flow_action *action)
+				   const struct rte_flow_attr *flow_attr,
+				   const struct rte_flow_action *action)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;

 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
 	case RTE_FLOW_ACTION_TYPE_RSS:
-		if (priv->sh->esw_mode)
+		if (flow_attr->transfer)
 			return false;
 		break;
 	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
@@ -9562,7 +9580,7 @@ mlx5_mirror_validate_sample_action(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
 	case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 	case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
-		if (!priv->sh->esw_mode)
+		if (!priv->sh->esw_mode && !flow_attr->transfer)
 			return false;
 		if (action[0].type == RTE_FLOW_ACTION_TYPE_RAW_DECAP &&
 		    action[1].type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP)
@@ -9584,19 +9602,22 @@ mlx5_mirror_validate_sample_action(struct rte_eth_dev *dev,
 */
 static int
 mlx5_hw_mirror_actions_list_validate(struct rte_eth_dev *dev,
+				     const struct rte_flow_attr *flow_attr,
				     const struct rte_flow_action *actions)
 {
 	if (actions[0].type == RTE_FLOW_ACTION_TYPE_SAMPLE) {
 		int i = 1;
 		bool valid;
 		const struct rte_flow_action_sample *sample = actions[0].conf;
-		valid = mlx5_mirror_validate_sample_action(dev, sample->actions);
+		valid = mlx5_mirror_validate_sample_action(dev, flow_attr,
+							   sample->actions);
 		if (!valid)
 			return -EINVAL;
 		if (actions[1].type == RTE_FLOW_ACTION_TYPE_SAMPLE) {
 			i = 2;
 			sample = actions[1].conf;
-			valid = mlx5_mirror_validate_sample_action(dev, sample->actions);
+			valid = mlx5_mirror_validate_sample_action(dev, flow_attr,
+								   sample->actions);
 			if (!valid)
 				return -EINVAL;
 		}
@@ -9780,6 +9801,7 @@ mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
 	struct mlx5_mirror *mirror;
 	enum mlx5dr_table_type table_type;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_attr *flow_attr = &table_cfg->attr.flow_attr;
 	uint8_t reformat_buf[MLX5_MIRROR_MAX_CLONES_NUM][MLX5_ENCAP_MAX_LEN];
 	struct mlx5dr_action_dest_attr mirror_attr[MLX5_MIRROR_MAX_CLONES_NUM + 1];
 	enum mlx5dr_action_type array_action_types[MLX5_MIRROR_MAX_CLONES_NUM + 1]
@@ -9787,9 +9809,10 @@
 	memset(mirror_attr, 0, sizeof(mirror_attr));
 	memset(array_action_types, 0, sizeof(array_action_types));
-	table_type = get_mlx5dr_table_type(&table_cfg->attr.flow_attr);
+	table_type = get_mlx5dr_table_type(flow_attr);
 	hws_flags = mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_NONE_ROOT][table_type];
-	clones_num = mlx5_hw_mirror_actions_list_validate(dev, actions);
+	clones_num = mlx5_hw_mirror_actions_list_validate(dev, flow_attr,
+							  actions);
 	if (clones_num < 0) {
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
				   actions, "Invalid mirror list format");
-- 
2.39.2