From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou,
 Matan Azrad
Subject: [PATCH 2/2] net/mlx5: validate flow actions in table creation
Date: Sun, 2 Jun 2024 09:00:50 +0300
Message-ID: <20240602060051.42910-3-getelson@nvidia.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240602060051.42910-1-getelson@nvidia.com>
References: <20240602060051.42910-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Add basic actions validation before creating flow table.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h            |  13 +
 drivers/net/mlx5/mlx5_flow.c       |  15 +-
 drivers/net/mlx5/mlx5_flow.h       |  33 ++-
 drivers/net/mlx5/mlx5_flow_dv.c    |  20 +-
 drivers/net/mlx5/mlx5_flow_hw.c    | 431 +++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c |   2 +-
 6 files changed, 445 insertions(+), 69 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9e4a5feb49..e2c22ffe97 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2010,6 +2010,19 @@ struct mlx5_priv {
 	RTE_ATOMIC(uint16_t) shared_refcnt; /* HW steering host reference counter. */
 };
 
+static __rte_always_inline bool
+mlx5_hws_active(const struct rte_eth_dev *dev)
+{
+#if defined(HAVE_MLX5_HWS_SUPPORT)
+	const struct mlx5_priv *priv = dev->data->dev_private;
+
+	return priv->sh->config.dv_flow_en == 2;
+#else
+	RTE_SET_USED(dev);
+	return false;
+#endif
+}
+
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
 #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
 #define CTRL_QUEUE_ID(priv) ((priv)->nb_queue - 1)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8eafceff37..c90b87c8ef 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1939,7 +1939,8 @@ mlx5_flow_validate_action_flag(uint64_t action_flags,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_action_mark(const struct rte_flow_action *action,
+mlx5_flow_validate_action_mark(struct rte_eth_dev *dev,
+			       const struct rte_flow_action *action,
 			       uint64_t action_flags,
 			       const struct rte_flow_attr *attr,
 			       struct rte_flow_error *error)
@@ -1971,6 +1972,10 @@ mlx5_flow_validate_action_mark(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
 					  "mark action not supported for "
 					  "egress");
+	if (attr->transfer && mlx5_hws_active(dev))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+					  "non-template mark action not supported for transfer");
 	return 0;
 }
 
@@ -2039,6 +2044,10 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_queue *queue = action->conf;
 
+	if (!queue)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no QUEUE action configuration");
 	if (action_flags & MLX5_FLOW_FATE_ACTIONS)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -2152,6 +2161,10 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 	const char *message;
 	uint32_t queue_idx;
 
+	if (!rss)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+			 action, "no RSS action configuration");
 	if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
 		DRV_LOG(WARNING, "port %u symmetric RSS supported with SORT",
 			dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8b4088e35e..dd5b30a8a4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2874,7 +2874,8 @@ int mlx5_flow_validate_action_drop(struct rte_eth_dev *dev,
 int mlx5_flow_validate_action_flag(uint64_t action_flags,
 				   const struct rte_flow_attr *attr,
 				   struct rte_flow_error *error);
-int mlx5_flow_validate_action_mark(const struct rte_flow_action *action,
+int mlx5_flow_validate_action_mark(struct rte_eth_dev *dev,
+				   const struct rte_flow_action *action,
 				   uint64_t action_flags,
 				   const struct rte_flow_attr *attr,
 				   struct rte_flow_error *error);
@@ -2895,6 +2896,33 @@ int mlx5_flow_validate_action_default_miss(uint64_t action_flags,
 int flow_validate_modify_field_level
 		(const struct rte_flow_field_data *data,
 		 struct rte_flow_error *error);
+int
+flow_dv_validate_action_l2_encap(struct rte_eth_dev *dev,
+				 uint64_t action_flags,
+				 const struct rte_flow_action *action,
+				 const struct rte_flow_attr *attr,
+				 struct rte_flow_error *error);
+int
+flow_dv_validate_action_decap(struct rte_eth_dev *dev,
+			      uint64_t action_flags,
+			      const struct rte_flow_action *action,
+			      const uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
+			      struct rte_flow_error *error);
+int
+flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev,
+			       uint64_t action_flags,
+			       uint64_t item_flags,
+			       bool root,
+			       struct rte_flow_error *error);
+int
+flow_dv_validate_action_raw_encap_decap
+	(struct rte_eth_dev *dev,
+	 const struct rte_flow_action_raw_decap *decap,
+	 const struct rte_flow_action_raw_encap *encap,
+	 const struct rte_flow_attr *attr, uint64_t *action_flags,
+	 int *actions_n, const struct rte_flow_action *action,
+	 uint64_t item_flags, struct rte_flow_error *error);
 int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 			      const uint8_t *mask,
 			      const uint8_t *nic_mask,
@@ -3348,5 +3376,8 @@ mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 void
 mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
 			    struct mlx5_indirect_list *reformat);
+
+extern const struct rte_flow_action_raw_decap empty_decap;
+
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6f72185916..06f5427abf 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3659,7 +3659,7 @@ flow_dv_validate_action_mark(struct rte_eth_dev *dev,
 					  "if tunnel offload active");
 	/* Fall back if no extended metadata register support. */
 	if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY)
-		return mlx5_flow_validate_action_mark(action, action_flags,
+		return mlx5_flow_validate_action_mark(dev, action, action_flags,
 						      attr, error);
 	/* Extensive metadata mode requires registers. */
 	if (!mlx5_flow_ext_mreg_supported(dev))
@@ -3898,7 +3898,7 @@ flow_dv_validate_action_count(struct rte_eth_dev *dev, bool shared,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 flow_dv_validate_action_l2_encap(struct rte_eth_dev *dev,
 				 uint64_t action_flags,
 				 const struct rte_flow_action *action,
@@ -3943,7 +3943,7 @@ flow_dv_validate_action_l2_encap(struct rte_eth_dev *dev,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 flow_dv_validate_action_decap(struct rte_eth_dev *dev,
 			      uint64_t action_flags,
 			      const struct rte_flow_action *action,
@@ -4016,7 +4016,7 @@ const struct rte_flow_action_raw_decap empty_decap = {.data = NULL, .size = 0,};
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 flow_dv_validate_action_raw_encap_decap
 	(struct rte_eth_dev *dev,
 	 const struct rte_flow_action_raw_decap *decap,
@@ -4105,7 +4105,7 @@ flow_dv_validate_action_raw_encap_decap
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev,
 			       uint64_t action_flags,
 			       uint64_t item_flags,
@@ -4124,10 +4124,12 @@ flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "CT cannot follow a fate action");
 	if ((action_flags & MLX5_FLOW_ACTION_METER) ||
-	    (action_flags & MLX5_FLOW_ACTION_AGE))
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
-					  "Only one ASO action is supported");
+	    (action_flags & MLX5_FLOW_ACTION_AGE)) {
+		if (!mlx5_hws_active(dev))
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  NULL, "Only one ASO action is supported");
+	}
 	if (action_flags & MLX5_FLOW_ACTION_ENCAP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 427d7f2359..a60d1e594e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4609,6 +4609,25 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 	return rte_errno;
 }
 
+static bool
+flow_hw_validate_template_domain(const struct rte_flow_attr *table_attr,
+				 uint32_t ingress, uint32_t egress,
+				 uint32_t transfer)
+{
+	if (table_attr->ingress)
+		return ingress != 0;
+	else if (table_attr->egress)
+		return egress != 0;
+	else
+		return transfer;
+}
+
+static bool
+flow_hw_validate_table_domain(const struct rte_flow_attr *table_attr)
+{
+	return table_attr->ingress + table_attr->egress + table_attr->transfer
+		== 1;
+}
+
 /**
  * Create flow table.
  *
@@ -4679,6 +4698,38 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	size_t tbl_mem_size;
 	int err;
 
+	if (!flow_hw_validate_table_domain(&attr->flow_attr)) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, "invalid table domain attributes");
+		return NULL;
+	}
+	for (i = 0; i < nb_item_templates; i++) {
+		const struct rte_flow_pattern_template_attr *pt_attr =
+			&item_templates[i]->attr;
+		bool match = flow_hw_validate_template_domain(&attr->flow_attr,
+							      pt_attr->ingress,
+							      pt_attr->egress,
+							      pt_attr->transfer);
+		if (!match) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "pattern template domain does not match table");
+			return NULL;
+		}
+	}
+	for (i = 0; i < nb_action_templates; i++) {
+		const struct rte_flow_actions_template *at = action_templates[i];
+		bool match = flow_hw_validate_template_domain(&attr->flow_attr,
+							      at->attr.ingress,
+							      at->attr.egress,
+							      at->attr.transfer);
+		if (!match) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "action template domain does not match table");
+			return NULL;
+		}
+	}
 	/* HWS layer accepts only 1 item template with root table. */
 	if (!attr->flow_attr.group)
 		max_tpl = 1;
@@ -6026,42 +6077,6 @@ flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
-/**
- * Validate raw_encap action.
- *
- * @param[in] dev
- *   Pointer to rte_eth_dev structure.
- * @param[in] action
- *   Pointer to the indirect action.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
-				  const struct rte_flow_action *mask,
-				  struct rte_flow_error *error)
-{
-	const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
-	const struct rte_flow_action_raw_encap *action_conf = action->conf;
-
-	if (!mask_conf || !mask_conf->size)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, mask,
-					  "raw_encap: size must be masked");
-	if (!action_conf || !action_conf->size)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, action,
-					  "raw_encap: invalid action configuration");
-	if (mask_conf->data && !action_conf->data)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, action,
-					  "raw_encap: masked data is missing");
-	return 0;
-}
-
 /**
  * Process `... / raw_decap / raw_encap / ...` actions sequence.
  * The PMD handles the sequence as a single encap or decap reformat action,
@@ -6378,6 +6393,278 @@ flow_hw_validate_action_nat64(struct rte_eth_dev *dev,
 				  NULL, "NAT64 action is not supported.");
 }
 
+static int
+flow_hw_validate_action_jump(struct rte_eth_dev *dev,
+			     const struct rte_flow_actions_template_attr *attr,
+			     const struct rte_flow_action *action,
+			     const struct rte_flow_action *mask,
+			     struct rte_flow_error *error)
+{
+	const struct rte_flow_action_jump *m = mask->conf;
+	const struct rte_flow_action_jump *v = action->conf;
+	struct mlx5_flow_template_table_cfg cfg = {
+		.external = true,
+		.attr = {
+			.flow_attr = {
+				.ingress = attr->ingress,
+				.egress = attr->egress,
+				.transfer = attr->transfer,
+			},
+		},
+	};
+	uint32_t t_group = 0;
+
+	if (!m || !m->group)
+		return 0;
+	if (!v)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Invalid jump action configuration");
+	if (flow_hw_translate_group(dev, &cfg, v->group, &t_group, error))
+		return -rte_errno;
+	if (t_group == 0)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported action - jump to root table");
+	return 0;
+}
+
+static int
+mlx5_hw_validate_action_mark(struct rte_eth_dev *dev,
+			     const struct rte_flow_action *template_action,
+			     const struct rte_flow_action *template_mask,
+			     uint64_t action_flags,
+			     const struct rte_flow_actions_template_attr *template_attr,
+			     struct rte_flow_error *error)
+{
+	const struct rte_flow_action_mark *mark_mask = template_mask->conf;
+	const struct rte_flow_action *action =
+		mark_mask && mark_mask->id ? template_action :
+		&(const struct rte_flow_action) {
+			.type = RTE_FLOW_ACTION_TYPE_MARK,
+			.conf = &(const struct rte_flow_action_mark) {
+				.id = MLX5_FLOW_MARK_MAX - 1
+			}
+		};
+	const struct rte_flow_attr attr = {
+		.ingress = template_attr->ingress,
+		.egress = template_attr->egress,
+		.transfer = template_attr->transfer
+	};
+
+	return mlx5_flow_validate_action_mark(dev, action, action_flags,
+					      &attr, error);
+}
+
+#define MLX5_FLOW_DEFAULT_INGRESS_QUEUE 0
+
+static int
+mlx5_hw_validate_action_queue(struct rte_eth_dev *dev,
+			      const struct rte_flow_action *template_action,
+			      const struct rte_flow_action *template_mask,
+			      const struct rte_flow_actions_template_attr *template_attr,
+			      uint64_t action_flags,
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_action_queue *queue_mask = template_mask->conf;
+	const struct rte_flow_action *action =
+		queue_mask && queue_mask->index ? template_action :
+		&(const struct rte_flow_action) {
+			.type = RTE_FLOW_ACTION_TYPE_QUEUE,
+			.conf = &(const struct rte_flow_action_queue) {
+				.index = MLX5_FLOW_DEFAULT_INGRESS_QUEUE
+			}
+		};
+	const struct rte_flow_attr attr = {
+		.ingress = template_attr->ingress,
+		.egress = template_attr->egress,
+		.transfer = template_attr->transfer
+	};
+
+	return mlx5_flow_validate_action_queue(action, action_flags,
+					       dev, &attr, error);
+}
+
+static int
+mlx5_hw_validate_action_rss(struct rte_eth_dev *dev,
+			    const struct rte_flow_action *template_action,
+			    const struct rte_flow_action *template_mask,
+			    const struct rte_flow_actions_template_attr *template_attr,
+			    __rte_unused uint64_t action_flags,
+			    struct rte_flow_error *error)
+{
+	const struct rte_flow_action_rss *mask = template_mask->conf;
+	const struct rte_flow_action *action = mask ? template_action :
+		&(const struct rte_flow_action) {
+			.type = RTE_FLOW_ACTION_TYPE_RSS,
+			.conf = &(const struct rte_flow_action_rss) {
+				.queue_num = 1,
+				.queue = (uint16_t [1]) {
+					MLX5_FLOW_DEFAULT_INGRESS_QUEUE
+				}
+			}
+		};
+
+	if (template_attr->egress || template_attr->transfer)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+					  "RSS action supported for ingress only");
+	return mlx5_validate_action_rss(dev, action, error);
+}
+
+static int
+mlx5_hw_validate_action_l2_encap(struct rte_eth_dev *dev,
+				 const struct rte_flow_action *template_action,
+				 const struct rte_flow_action *template_mask,
+				 const struct rte_flow_actions_template_attr *template_attr,
+				 uint64_t action_flags,
+				 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_vxlan_encap default_action_conf = {
+		.definition = (struct rte_flow_item *)
+			(struct rte_flow_item [1]) {
+				[0] = { .type = RTE_FLOW_ITEM_TYPE_END }
+			}
+	};
+	const struct rte_flow_action *action = template_mask->conf ?
+		template_action : &(const struct rte_flow_action) {
+			.type = template_mask->type,
+			.conf = &default_action_conf
+		};
+	const struct rte_flow_attr attr = {
+		.ingress = template_attr->ingress,
+		.egress = template_attr->egress,
+		.transfer = template_attr->transfer
+	};
+
+	return flow_dv_validate_action_l2_encap(dev, action_flags, action,
+						&attr, error);
+}
+
+static int
+mlx5_hw_validate_action_l2_decap(struct rte_eth_dev *dev,
+				 const struct rte_flow_action *template_action,
+				 const struct rte_flow_action *template_mask,
+				 const struct rte_flow_actions_template_attr *template_attr,
+				 uint64_t action_flags,
+				 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_vxlan_encap default_action_conf = {
+		.definition = (struct rte_flow_item *)
+			(struct rte_flow_item [1]) {
+				[0] = { .type = RTE_FLOW_ITEM_TYPE_END }
+			}
+	};
+	const struct rte_flow_action *action = template_mask->conf ?
+		template_action : &(const struct rte_flow_action) {
+			.type = template_mask->type,
+			.conf = &default_action_conf
+		};
+	const struct rte_flow_attr attr = {
+		.ingress = template_attr->ingress,
+		.egress = template_attr->egress,
+		.transfer = template_attr->transfer
+	};
+	uint64_t item_flags =
+		action->type == RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
+		MLX5_FLOW_LAYER_VXLAN : 0;
+
+	return flow_dv_validate_action_decap(dev, action_flags, action,
+					     item_flags, &attr, error);
+}
+
+static int
+mlx5_hw_validate_action_conntrack(struct rte_eth_dev *dev,
+				  const struct rte_flow_action *template_action,
+				  const struct rte_flow_action *template_mask,
+				  const struct rte_flow_actions_template_attr *template_attr,
+				  uint64_t action_flags,
+				  struct rte_flow_error *error)
+{
+	RTE_SET_USED(template_action);
+	RTE_SET_USED(template_mask);
+	RTE_SET_USED(template_attr);
+	return flow_dv_validate_action_aso_ct(dev, action_flags,
+					      MLX5_FLOW_LAYER_OUTER_L4_TCP,
+					      false, error);
+}
+
+static int
+flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
+				  const struct rte_flow_action *mask,
+				  struct rte_flow_error *error)
+{
+	const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
+	const struct rte_flow_action_raw_encap *action_conf = action->conf;
+
+	if (!mask_conf || !mask_conf->size)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, mask,
+					  "raw_encap: size must be masked");
+	if (!action_conf || !action_conf->size)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "raw_encap: invalid action configuration");
+	if (mask_conf->data && !action_conf->data)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  action, "raw_encap: masked data is missing");
+	return 0;
+}
+
+
+static int
+flow_hw_validate_action_raw_reformat(struct rte_eth_dev *dev,
+				     const struct rte_flow_action *template_action,
+				     const struct rte_flow_action *template_mask,
+				     const struct
+				     rte_flow_actions_template_attr *template_attr,
+				     uint64_t *action_flags,
+				     struct rte_flow_error *error)
+{
+	const struct rte_flow_action *encap_action = NULL;
+	const struct rte_flow_action *encap_mask = NULL;
+	const struct rte_flow_action_raw_decap *raw_decap = NULL;
+	const struct rte_flow_action_raw_encap *raw_encap = NULL;
+	const struct rte_flow_attr attr = {
+		.ingress = template_attr->ingress,
+		.egress = template_attr->egress,
+		.transfer = template_attr->transfer
+	};
+	uint64_t item_flags = 0;
+	int ret, actions_n = 0;
+
+	if (template_action->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP) {
+		raw_decap = template_mask->conf ?
+			template_action->conf : &empty_decap;
+		if ((template_action + 1)->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+			if ((template_mask + 1)->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION,
+							  template_mask + 1, "invalid mask type");
+			encap_action = template_action + 1;
+			encap_mask = template_mask + 1;
+		}
+	} else {
+		encap_action = template_action;
+		encap_mask = template_mask;
+	}
+	if (encap_action) {
+		raw_encap = encap_action->conf;
+		ret = flow_hw_validate_action_raw_encap(encap_action,
+							encap_mask, error);
+		if (ret)
+			return ret;
+	}
+	return flow_dv_validate_action_raw_encap_decap(dev, raw_decap,
+						       raw_encap, &attr,
+						       action_flags, &actions_n,
+						       template_action,
+						       item_flags, error);
+}
+
+
 static int
 mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			      const struct rte_flow_actions_template_attr *attr,
@@ -6432,15 +6719,27 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 				return ret;
 			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_mark(dev, action, mask,
+							   action_flags,
+							   attr, error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_MARK;
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
-			/* TODO: Validation logic */
+			ret = mlx5_flow_validate_action_drop
+				(dev, action_flags,
+				 &(struct rte_flow_attr){.egress = attr->egress},
+				 error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_DROP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_JUMP:
-			/* TODO: Validation logic */
+			/* Only validate the jump to root table in template stage. */
+			ret = flow_hw_validate_action_jump(dev, attr, action, mask, error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_JUMP;
 			break;
 #ifdef HAVE_MLX5DV_DR_ACTION_CREATE_DEST_ROOT_TABLE
@@ -6462,38 +6761,52 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			break;
 #endif
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_queue(dev, action, mask,
+							    attr, action_flags,
+							    error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_QUEUE;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_rss(dev, action, mask,
+							  attr, action_flags,
+							  error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_RSS;
 			break;
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
-			/* TODO: Validation logic */
-			action_flags |= MLX5_FLOW_ACTION_ENCAP;
-			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_l2_encap(dev, action, mask,
+							       attr, action_flags,
+							       error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_ENCAP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
-			/* TODO: Validation logic */
-			action_flags |= MLX5_FLOW_ACTION_DECAP;
-			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_l2_decap(dev, action, mask,
+							       attr, action_flags,
+							       error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_DECAP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
-			ret = flow_hw_validate_action_raw_encap(action, mask, error);
-			if (ret < 0)
-				return ret;
-			action_flags |= MLX5_FLOW_ACTION_ENCAP;
-			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
-			/* TODO: Validation logic */
-			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			ret = flow_hw_validate_action_raw_reformat(dev, action,
+								   mask, attr,
+								   &action_flags,
+								   error);
+			if (ret)
+				return ret;
+			if (action->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP &&
+			    (action + 1)->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+				action_flags |= MLX5_FLOW_XCAP_ACTIONS;
+				i++;
+			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
 			ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
@@ -6561,7 +6874,11 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			action_flags |= MLX5_FLOW_ACTION_COUNT;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			/* TODO: Validation logic */
+			ret = mlx5_hw_validate_action_conntrack(dev, action, mask,
+								attr, action_flags,
+								error);
+			if (ret)
+				return ret;
 			action_flags |= MLX5_FLOW_ACTION_CT;
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9c818abc..9879f14213 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1522,7 +1522,7 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 			action_flags |= MLX5_FLOW_ACTION_FLAG;
 			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
-			ret = mlx5_flow_validate_action_mark(actions,
+			ret = mlx5_flow_validate_action_mark(dev, actions,
 							     action_flags,
 							     attr, error);
-- 
2.43.0
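Reviewer's note: several of the new mlx5_hw_validate_action_* helpers in this patch share one pattern. At actions-template creation time the concrete action value is unknown whenever the template mask leaves it unmasked, so the helper substitutes a representative default configuration (e.g. queue 0 via MLX5_FLOW_DEFAULT_INGRESS_QUEUE) and runs the existing single-flow validator against that. A minimal standalone sketch of the idea, with simplified stand-in types (`action_queue`, `validate_queue` and `validate_queue_template` are illustrative only, not the DPDK API):

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for struct rte_flow_action_queue. */
struct action_queue {
	uint16_t index;
};

/* Stand-in for the per-flow validator: index must be a valid Rx queue. */
static int
validate_queue(const struct action_queue *conf, uint16_t nb_queues)
{
	return conf->index < nb_queues ? 0 : -1;
}

/*
 * Template-time validation: if the mask fixes the index, validate the
 * real template value; otherwise the value is unknown until flow
 * creation, so validate a known-good default (queue 0) instead.
 */
static int
validate_queue_template(const struct action_queue *action,
			const struct action_queue *mask, uint16_t nb_queues)
{
	static const struct action_queue def = { .index = 0 };
	const struct action_queue *conf =
		(mask && mask->index) ? action : &def;

	return validate_queue(conf, nb_queues);
}
```

An unmasked template therefore always passes this check and defers the real index check to flow-creation time, which mirrors how mlx5_hw_validate_action_queue falls back to a default rte_flow_action_queue above.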