From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maayan Kashani
To: dev@dpdk.org
CC: Gregory Etelson, Viacheslav Ovsiienko, Ori Kam, Suanming Mou,
 Matan Azrad
Subject: [PATCH v4 2/6] net/mlx5: support RSS expansion in non-template HWS setup
Date: Thu, 6 Jun 2024 13:12:10 +0300
Message-ID: <20240606101214.172057-3-mkashani@nvidia.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20240606101214.172057-1-mkashani@nvidia.com>
References: <20240603105241.10482-1-mkashani@nvidia.com>
 <20240606101214.172057-1-mkashani@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Gregory Etelson

In the non-template environment, the MLX5 PMD expands a flow rule
carrying the RSS action into several rules. This patch adds the same
RSS flow rule expansion for legacy (non-template) flow rules in the
template (HWS) setup.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/hws/mlx5dr_definer.c |   2 +
 drivers/net/mlx5/meson.build          |   1 +
 drivers/net/mlx5/mlx5.c               |   4 +
 drivers/net/mlx5/mlx5.h               |   6 +-
 drivers/net/mlx5/mlx5_flow.h          |  36 +-
 drivers/net/mlx5/mlx5_flow_hw.c       |  65 +--
 drivers/net/mlx5/mlx5_nta_rss.c       | 564 ++++++++++++++++++++++++++
 7 files changed, 646 insertions(+), 32 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_nta_rss.c
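
For illustration, a minimal sketch (not part of the commit; port and
queue ids are placeholders) of the kind of legacy rule this expansion
targets: the pattern stops at L2 while the RSS action hashes on L3/L4
headers, so the PMD has to expand the rule per packet type.

#include <rte_flow.h>

static struct rte_flow *
create_wide_rss_rule(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint16_t queues[] = { 0, 1 };
	static const struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.level = 1, /* hash outer headers */
		.types = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP,
		.queue_num = 2,
		.queue = queues,
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH }, /* no L3/L4 items */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_attr attr = { .ingress = 1 };

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}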

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 4d297352a6..29046ee875 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -381,6 +381,8 @@ mlx5dr_definer_ptype_l4_set(struct mlx5dr_definer_fc *fc,
 		l4_type = STE_UDP;
 	else if (packet_type == (inner ? RTE_PTYPE_INNER_L4_ICMP : RTE_PTYPE_L4_ICMP))
 		l4_type = STE_ICMP;
+	else if (packet_type == RTE_PTYPE_TUNNEL_ESP)
+		l4_type = STE_ESP;
 	DR_SET(tag, l4_type, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index d705fe21bb..b279ddf47c 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -42,6 +42,7 @@ sources = files(
         'mlx5_vlan.c',
         'mlx5_utils.c',
         'mlx5_devx.c',
+        'mlx5_nta_rss.c',
 )
 
 if is_linux
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index d15302d00d..5bde450a6d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2365,6 +2365,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		claim_zero(mlx5_geneve_tlv_options_destroy(priv->tlv_options, sh->phdev));
 		priv->tlv_options = NULL;
 	}
+	if (priv->ptype_rss_groups) {
+		mlx5_ipool_destroy(priv->ptype_rss_groups);
+		priv->ptype_rss_groups = NULL;
+	}
 #endif
 	if (priv->rxq_privs != NULL) {
 		/* XXX race condition if mlx5_rx_burst() is still running. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e635907c52..1b55229c52 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1184,6 +1184,10 @@ struct mlx5_flow_tbl_resource {
 #define MLX5_MAX_TABLES_EXTERNAL MLX5_FLOW_TABLE_LEVEL_POLICY
 #define MLX5_FLOW_TABLE_HWS_POLICY (MLX5_MAX_TABLES - 10)
 #define MLX5_MAX_TABLES_FDB UINT16_MAX
+#define MLX5_FLOW_TABLE_PTYPE_RSS_NUM 1024
+#define MLX5_FLOW_TABLE_PTYPE_RSS_LAST (MLX5_MAX_TABLES - 11)
+#define MLX5_FLOW_TABLE_PTYPE_RSS_BASE \
+(1 + MLX5_FLOW_TABLE_PTYPE_RSS_LAST - MLX5_FLOW_TABLE_PTYPE_RSS_NUM)
 #define MLX5_FLOW_TABLE_FACTOR 10
 
 /* ID generation structure. */
@@ -2019,7 +2023,7 @@ struct mlx5_priv {
 	 * Todo: consider to add *_MAX macro.
 	 */
 	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
-
+	struct mlx5_indexed_pool *ptype_rss_groups;
 #endif
 	struct rte_eth_dev *shared_host; /* Host device for HW steering. */
 	RTE_ATOMIC(uint16_t) shared_refcnt; /* HW steering host reference counter. */
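
A quick sanity check of the group-id window reserved by the macros just
added to mlx5.h (a sketch, not in the patch; C11 static_assert from
<assert.h>): MLX5_FLOW_TABLE_PTYPE_RSS_BASE is defined so that the
window [BASE, LAST] holds exactly MLX5_FLOW_TABLE_PTYPE_RSS_NUM (1024)
group ids.

	static_assert(MLX5_FLOW_TABLE_PTYPE_RSS_LAST -
		      MLX5_FLOW_TABLE_PTYPE_RSS_BASE + 1 ==
		      MLX5_FLOW_TABLE_PTYPE_RSS_NUM,
		      "PTYPE RSS group window size mismatch");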
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7ccc3cb7cd..7e0f005741 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -484,6 +484,9 @@ enum mlx5_feature_name {
	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
+/* Valid L4 RSS types */
+#define MLX5_L4_RSS_TYPES (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
+
 /* IBV hash source bits for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
@@ -1313,6 +1316,8 @@ enum {
 
 #define MLX5_DR_RULE_SIZE 72
 
+SLIST_HEAD(mlx5_nta_rss_flow_head, rte_flow_hw);
+
 /** HWS non template flow data. */
 struct rte_flow_nt2hws {
 	/** BWC rule pointer. */
@@ -1325,7 +1330,10 @@ struct rte_flow_nt2hws {
 	struct mlx5_flow_dv_modify_hdr_resource *modify_hdr;
 	/** Encap/decap index. */
 	uint32_t rix_encap_decap;
-};
+	uint8_t chaned_flow;
+	/** Chain NTA flows. */
+	SLIST_ENTRY(rte_flow_hw) next;
+} __rte_packed;
 
 /** HWS flow struct. */
 struct rte_flow_hw {
@@ -3415,7 +3423,6 @@ flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
 #endif
 	return 0;
 }
-
 void
 mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
 #ifdef HAVE_MLX5_HWS_SUPPORT
@@ -3428,5 +3435,30 @@ mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 void
 mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
 			    struct mlx5_indirect_list *reformat);
+int
+flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    uint64_t item_flags, uint64_t action_flags, bool external,
+		    struct rte_flow_hw **flow, struct rte_flow_error *error);
+void
+flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow);
+void
+flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		     uintptr_t flow_idx);
+const struct rte_flow_action_rss *
+flow_nta_locate_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error);
+struct rte_flow_hw *
+flow_nta_handle_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    const struct rte_flow_action_rss *rss_conf,
+		    uint64_t item_flags, uint64_t action_flags,
+		    bool external, enum mlx5_flow_type flow_type,
+		    struct rte_flow_error *error);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
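
Note that the new SLIST_ENTRY lives behind the rte_flow_hw::nt2hws
pointer, so the sys/queue.h list macros take "nt2hws->next" as the
field argument, as flow_hw_list_destroy() does below. A hypothetical
helper, for illustration only, that walks such a chain:

	static unsigned int
	nta_rss_chain_len(struct rte_flow_hw *base)
	{
		unsigned int n = 0;
		struct rte_flow_hw *f;

		/* Every flow after the head has nt2hws->chaned_flow set. */
		for (f = base; f != NULL; f = SLIST_NEXT(f, nt2hws->next))
			n++;
		return n;
	}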
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 696f675f63..7984bf2f73 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -492,7 +492,7 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
 		fields |= IBV_RX_HASH_IPSEC_SPI;
 	if (rss_inner)
 		fields |= IBV_RX_HASH_INNER;
-	*hash_fields = fields;
+	*hash_fields |= fields;
 }
 
 /**
@@ -755,9 +755,7 @@ flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
 static inline struct mlx5_hrxq*
 flow_hw_tir_action_register(struct rte_eth_dev *dev,
 			    uint32_t hws_flags,
-			    const struct rte_flow_action *action,
-			    uint64_t item_flags,
-			    bool is_template)
+			    const struct rte_flow_action *action)
 {
 	struct mlx5_flow_rss_desc rss_desc = {
 		.hws_flags = hws_flags,
@@ -780,10 +778,7 @@ flow_hw_tir_action_register(struct rte_eth_dev *dev,
 		rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN;
 		rss_desc.types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 		rss_desc.symmetric_hash_function = MLX5_RSS_IS_SYMM(rss->func);
-		if (is_template)
-			flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
-		else
-			flow_dv_hashfields_set(item_flags, &rss_desc, &rss_desc.hash_fields);
+		flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
 		flow_dv_action_rss_l34_hash_adjust(rss->types, &rss_desc.hash_fields);
 		if (rss->level > 1) {
@@ -2508,9 +2503,8 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			    ((const struct rte_flow_action_queue *)
 			     masks->conf)->index) {
 				acts->tir = flow_hw_tir_action_register
-						(dev,
-						 mlx5_hw_act_flag[!!attr->group][type],
-						 actions, 0, true);
+						(dev, mlx5_hw_act_flag[!!attr->group][type],
+						 actions);
 				if (!acts->tir)
 					goto err;
 				acts->rule_acts[dr_pos].action =
@@ -2524,9 +2518,8 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_RSS:
 			if (actions->conf && masks->conf) {
 				acts->tir = flow_hw_tir_action_register
-						(dev,
-						 mlx5_hw_act_flag[!!attr->group][type],
-						 actions, 0, true);
+						(dev, mlx5_hw_act_flag[!!attr->group][type],
+						 actions);
 				if (!acts->tir)
 					goto err;
 				acts->rule_acts[dr_pos].action =
@@ -3413,11 +3406,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
-			hrxq = flow_hw_tir_action_register(dev,
-					ft_flag,
-					action,
-					item_flags,
-					!flow->nt_rule);
+			hrxq = flow_hw_tir_action_register(dev, ft_flag, action);
 			if (!hrxq)
 				goto error;
 			rule_acts[act_data->action_dst].action = hrxq->action;
@@ -12735,7 +12724,7 @@ static int flow_hw_apply(struct rte_eth_dev *dev __rte_unused,
  * @return
  *   0 on success, negative errno value otherwise and rte_errno set.
  */
-static int
+int
 flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		    const struct rte_flow_attr *attr,
 		    const struct rte_flow_item items[],
@@ -12848,7 +12837,7 @@ flow_hw_create_flow(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 }
 #endif
 
-static void
+void
 flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
 {
 	int ret;
@@ -12903,18 +12892,23 @@ flow_hw_destroy(struct rte_eth_dev *dev, struct rte_flow_hw *flow)
  * @param[in] flow_addr
  *   Address of flow to destroy.
  */
-static void flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
-				 uintptr_t flow_addr)
+void
+flow_hw_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+		     uintptr_t flow_addr)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	/* Get flow via idx */
 	struct rte_flow_hw *flow = (struct rte_flow_hw *)flow_addr;
+	struct mlx5_nta_rss_flow_head head = { .slh_first = flow };
 
-	if (!flow)
+	if (flow->nt2hws->chaned_flow)
 		return;
-	flow_hw_destroy(dev, flow);
-	/* Release flow memory by idx */
-	mlx5_ipool_free(priv->flows[type], flow->idx);
+	while (!SLIST_EMPTY(&head)) {
+		flow = SLIST_FIRST(&head);
+		SLIST_REMOVE_HEAD(&head, nt2hws->next);
+		flow_hw_destroy(dev, flow);
+		/* Release flow memory by idx */
+		mlx5_ipool_free(priv->flows[type], flow->idx);
+	}
 }
 #endif
 
@@ -12952,6 +12946,19 @@ static uintptr_t flow_hw_list_create(struct rte_eth_dev *dev,
 	uint64_t item_flags = flow_hw_matching_item_flags_get(items);
 	uint64_t action_flags = flow_hw_action_flags_get(actions, error);
 
+
+	if (action_flags & MLX5_FLOW_ACTION_RSS) {
+		const struct rte_flow_action_rss
+			*rss_conf = flow_nta_locate_rss(dev, actions, error);
+		flow = flow_nta_handle_rss(dev, attr, items, actions, rss_conf,
+					   item_flags, action_flags, external,
+					   type, error);
+		if (flow)
+			return (uintptr_t)flow;
+		if (error->type != RTE_FLOW_ERROR_TYPE_NONE)
+			return 0;
+		/* Fall Through to non-expanded RSS flow */
+	}
 	/*TODO: Handle split/expand to num_flows. */
 
 	/* Create single flow. */
@@ -13111,7 +13118,7 @@ mirror_format_tir(struct rte_eth_dev *dev,
 
 	table_type = get_mlx5dr_table_type(&table_cfg->attr.flow_attr);
 	hws_flags = mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_NONE_ROOT][table_type];
-	tir_ctx = flow_hw_tir_action_register(dev, hws_flags, action, 0, true);
+	tir_ctx = flow_hw_tir_action_register(dev, hws_flags, action);
 	if (!tir_ctx)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
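
With the sketch rule from the start of this mail (ETH-only pattern, RSS
on IP|UDP|TCP, outer level), the module added below would create roughly
the following rule set, with G denoting a group id taken from the new
PTYPE pool (a behavioural sketch following mlx5_hw_rss_expand_l4() and
its helpers, not an exhaustive trace):

 - base rule: original pattern, RSS replaced by JUMP to G;
   this is the flow returned to the application
 - miss rule in G (priority 3): ETH / END -> RSS with the original types
 - in G: PTYPE = L3_IPV4 -> RSS on the requested types minus IPv6 bits
 - in G: PTYPE = L3_IPV6 -> RSS on the requested types minus IPv4 bits
 - in G: PTYPE = L3_IPV4 + L4_UDP -> as above, minus TCP and ESP bits
 - in G: PTYPE = L3_IPV4 + L4_TCP -> as above, minus UDP and ESP bits
 - in G: PTYPE = L3_IPV6 + L4_UDP / L4_TCP -> the IPv6 counterparts

All flows except the base one are marked chaned_flow and chained behind
it, so destroying the base flow releases the whole expansion (see
flow_hw_list_destroy() above).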
diff --git a/drivers/net/mlx5/mlx5_nta_rss.c b/drivers/net/mlx5/mlx5_nta_rss.c
new file mode 100644
index 0000000000..1f0085ff06
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_nta_rss.c
@@ -0,0 +1,564 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_flow.h>
+
+#include <mlx5_malloc.h>
+#include "mlx5.h"
+#include "mlx5_defs.h"
+#include "mlx5_flow.h"
+#include "mlx5_rx.h"
+#include "rte_common.h"
+
+#ifdef HAVE_MLX5_HWS_SUPPORT
+
+struct mlx5_nta_rss_ctx {
+	struct rte_eth_dev *dev;
+	struct rte_flow_attr *attr;
+	struct rte_flow_item *pattern;
+	struct rte_flow_action *actions;
+	const struct rte_flow_action_rss *rss_conf;
+	struct rte_flow_error *error;
+	struct mlx5_nta_rss_flow_head *head;
+	uint64_t pattern_flags;
+	enum mlx5_flow_type flow_type;
+	bool external;
+};
+
+#define MLX5_RSS_PTYPE_ITEM_INDEX 0
+#ifdef MLX5_RSS_PTYPE_DEBUG
+#define MLX5_RSS_PTYPE_ACTION_INDEX 1
+#else
+#define MLX5_RSS_PTYPE_ACTION_INDEX 0
+#endif
+
+#define MLX5_RSS_PTYPE_ITEMS_NUM (MLX5_RSS_PTYPE_ITEM_INDEX + 2)
+#define MLX5_RSS_PTYPE_ACTIONS_NUM (MLX5_RSS_PTYPE_ACTION_INDEX + 2)
+
+static int
+mlx5_nta_ptype_rss_flow_create(struct mlx5_nta_rss_ctx *ctx,
+			       uint32_t ptype, uint64_t rss_type)
+{
+	int ret;
+	struct rte_flow_hw *flow;
+	struct rte_flow_item_ptype *ptype_spec = (void *)(uintptr_t)
+		ctx->pattern[MLX5_RSS_PTYPE_ITEM_INDEX].spec;
+	struct rte_flow_action_rss *rss_conf = (void *)(uintptr_t)
+		ctx->actions[MLX5_RSS_PTYPE_ACTION_INDEX].conf;
+	bool dbg_log = rte_log_can_log(mlx5_logtype, RTE_LOG_DEBUG);
+	uint32_t mark_id = 0;
+#ifdef MLX5_RSS_PTYPE_DEBUG
+	struct rte_flow_action_mark *mark = (void *)(uintptr_t)
+		ctx->actions[MLX5_RSS_PTYPE_ACTION_INDEX - 1].conf;
+
+	/*
+	 * Inner L3 and L4 ptype values are too large for 24bit mark
+	 */
+	mark->id =
+		((ptype & (RTE_PTYPE_INNER_L3_MASK | RTE_PTYPE_INNER_L4_MASK)) == ptype) ?
+		ptype >> 20 : ptype;
+	mark_id = mark->id;
+	dbg_log = true;
+#endif
+	ptype_spec->packet_type = ptype;
+	rss_conf->types = rss_type;
+	ret = flow_hw_create_flow(ctx->dev, MLX5_FLOW_TYPE_GEN, ctx->attr,
+				  ctx->pattern, ctx->actions,
+				  MLX5_FLOW_ITEM_PTYPE, MLX5_FLOW_ACTION_RSS,
+				  ctx->external, &flow, ctx->error);
+	if (flow) {
+		SLIST_INSERT_HEAD(ctx->head, flow, nt2hws->next);
+		if (dbg_log) {
+			DRV_LOG(NOTICE,
+				"PTYPE RSS: group %u ptype spec %#x rss types %#lx mark %#x\n",
+				ctx->attr->group, ptype_spec->packet_type,
+				(unsigned long)rss_conf->types, mark_id);
+		}
+	}
+	return ret;
+}
+
+/*
+ * Call conditions:
+ * * Flow pattern did not include outer L3 and L4 items.
+ * * RSS configuration had L3 hash types.
+ */
+static struct rte_flow_hw *
+mlx5_hw_rss_expand_l3(struct mlx5_nta_rss_ctx *rss_ctx)
+{
+	int ret;
+	int ptype_ip4, ptype_ip6;
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_ctx->rss_conf->types);
+
+	if (rss_ctx->rss_conf->level < 2) {
+		ptype_ip4 = RTE_PTYPE_L3_IPV4;
+		ptype_ip6 = RTE_PTYPE_L3_IPV6;
+	} else {
+		ptype_ip4 = RTE_PTYPE_INNER_L3_IPV4;
+		ptype_ip6 = RTE_PTYPE_INNER_L3_IPV6;
+	}
+	if (rss_types & MLX5_IPV4_LAYER_TYPES) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_ip4, (rss_types & ~MLX5_IPV6_LAYER_TYPES));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & MLX5_IPV6_LAYER_TYPES) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_ip6, rss_types & ~MLX5_IPV4_LAYER_TYPES);
+		if (ret)
+			goto error;
+	}
+	return SLIST_FIRST(rss_ctx->head);
+
+error:
+	flow_hw_list_destroy(rss_ctx->dev, rss_ctx->flow_type,
+			     (uintptr_t)SLIST_FIRST(rss_ctx->head));
+	return NULL;
+}
+
+static void
+mlx5_nta_rss_expand_l3_l4(struct mlx5_nta_rss_ctx *rss_ctx,
+			  uint64_t rss_types, uint64_t rss_l3_types)
+{
+	int ret;
+	int ptype_l3, ptype_l4_udp, ptype_l4_tcp, ptype_l4_esp = 0;
+	uint64_t rss = rss_types &
+		~(rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+		  MLX5_IPV6_LAYER_TYPES : MLX5_IPV4_LAYER_TYPES);
+
+	if (rss_ctx->rss_conf->level < 2) {
+		ptype_l3 = rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+			   RTE_PTYPE_L3_IPV4 : RTE_PTYPE_L3_IPV6;
+		ptype_l4_esp = RTE_PTYPE_TUNNEL_ESP;
+		ptype_l4_udp = RTE_PTYPE_L4_UDP;
+		ptype_l4_tcp = RTE_PTYPE_L4_TCP;
+	} else {
+		ptype_l3 = rss_l3_types == MLX5_IPV4_LAYER_TYPES ?
+			   RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_INNER_L3_IPV6;
+		ptype_l4_udp = RTE_PTYPE_INNER_L4_UDP;
+		ptype_l4_tcp = RTE_PTYPE_INNER_L4_TCP;
+	}
+	if (rss_types & RTE_ETH_RSS_ESP) {
+		ret = mlx5_nta_ptype_rss_flow_create
+			(rss_ctx, ptype_l3 | ptype_l4_esp,
+			 rss & ~(RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & RTE_ETH_RSS_UDP) {
+		ret = mlx5_nta_ptype_rss_flow_create(rss_ctx,
+			ptype_l3 | ptype_l4_udp,
+			rss & ~(RTE_ETH_RSS_ESP | RTE_ETH_RSS_TCP));
+		if (ret)
+			goto error;
+	}
+	if (rss_types & RTE_ETH_RSS_TCP) {
+		ret = mlx5_nta_ptype_rss_flow_create(rss_ctx,
+			ptype_l3 | ptype_l4_tcp,
+			rss & ~(RTE_ETH_RSS_ESP | RTE_ETH_RSS_UDP));
+		if (ret)
+			goto error;
+	}
+	return;
+error:
+	flow_hw_list_destroy(rss_ctx->dev, rss_ctx->flow_type,
+			     (uintptr_t)SLIST_FIRST(rss_ctx->head));
+}
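+
+/*
+ * Worked example: with rss_conf->types = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+ * RTE_ETH_RSS_TCP at the outer level, the IPv4 invocation above first
+ * drops the IPv6-related bits, and the UDP-ptype rule then drops the
+ * TCP and ESP bits, so that rule hashes on IPv4 L3 types plus IPv4 UDP
+ * only.
+ */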
+
+/*
+ * Call conditions:
+ * * Flow pattern did not include L4 item.
+ * * RSS configuration had L4 hash types.
+ */
+static struct rte_flow_hw *
+mlx5_hw_rss_expand_l4(struct mlx5_nta_rss_ctx *rss_ctx)
+{
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_ctx->rss_conf->types);
+	uint64_t l3_item = rss_ctx->pattern_flags &
+			   (rss_ctx->rss_conf->level < 2 ?
+			    MLX5_FLOW_LAYER_OUTER_L3 : MLX5_FLOW_LAYER_INNER_L3);
+
+	if (l3_item) {
+		/*
+		 * Outer L3 header was present in the original pattern.
+		 * Expand L4 level only.
+		 */
+		if (l3_item & MLX5_FLOW_LAYER_L3_IPV4)
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types, MLX5_IPV4_LAYER_TYPES);
+		else
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types, MLX5_IPV6_LAYER_TYPES);
+	} else {
+		if (rss_types & (MLX5_IPV4_LAYER_TYPES | MLX5_IPV6_LAYER_TYPES)) {
+			/*
+			 * No outer L3 item in application flow pattern.
+			 * RSS hash types are L3 and L4.
+			 * ** Expand L3 according to RSS configuration and L4.
+			 */
+			mlx5_hw_rss_expand_l3(rss_ctx);
+			if (rss_types & MLX5_IPV4_LAYER_TYPES)
+				mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+							  MLX5_IPV4_LAYER_TYPES);
+			if (rss_types & MLX5_IPV6_LAYER_TYPES)
+				mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+							  MLX5_IPV6_LAYER_TYPES);
+		} else {
+			/*
+			 * No outer L3 item in application flow pattern,
+			 * RSS hash type is L4 only.
+			 */
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+						  MLX5_IPV4_LAYER_TYPES);
+			mlx5_nta_rss_expand_l3_l4(rss_ctx, rss_types,
+						  MLX5_IPV6_LAYER_TYPES);
+		}
+	}
+	return SLIST_EMPTY(rss_ctx->head) ? NULL : SLIST_FIRST(rss_ctx->head);
+}
+
+static struct mlx5_indexed_pool *
+mlx5_nta_ptype_ipool_create(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool_config ipool_cfg = {
+		.size = 1,
+		.trunk_size = 32,
+		.grow_trunk = 5,
+		.grow_shift = 1,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.max_idx = MLX5_FLOW_TABLE_PTYPE_RSS_NUM,
+		.free = mlx5_free,
+		.type = "mlx5_nta_ptype_rss"
+	};
+	return mlx5_ipool_create(&ipool_cfg);
+}
+
+static void
+mlx5_hw_release_rss_ptype_group(struct rte_eth_dev *dev, uint32_t group)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv->ptype_rss_groups)
+		return;
+	mlx5_ipool_free(priv->ptype_rss_groups, group);
+}
+
+static uint32_t
+mlx5_hw_get_rss_ptype_group(struct rte_eth_dev *dev)
+{
+	void *obj;
+	uint32_t idx = 0;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv->ptype_rss_groups) {
+		priv->ptype_rss_groups = mlx5_nta_ptype_ipool_create(dev);
+		if (!priv->ptype_rss_groups) {
+			DRV_LOG(DEBUG, "PTYPE RSS: failed to allocate groups pool");
+			return 0;
+		}
+	}
+	obj = mlx5_ipool_malloc(priv->ptype_rss_groups, &idx);
+	if (!obj) {
+		DRV_LOG(DEBUG, "PTYPE RSS: failed to fetch ptype group from the pool");
+		return 0;
+	}
+	return idx + MLX5_FLOW_TABLE_PTYPE_RSS_BASE;
+}
+
+static struct rte_flow_hw *
+mlx5_hw_rss_ptype_create_miss_flow(struct rte_eth_dev *dev,
+				   const struct rte_flow_action_rss *rss_conf,
+				   uint32_t ptype_group, bool external,
+				   struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = NULL;
+	const struct rte_flow_attr miss_attr = {
+		.ingress = 1,
+		.group = ptype_group,
+		.priority = 3
+	};
+	const struct rte_flow_item miss_pattern[2] = {
+		[0] = { .type = RTE_FLOW_ITEM_TYPE_ETH },
+		[1] = { .type = RTE_FLOW_ITEM_TYPE_END }
+	};
+	struct rte_flow_action miss_actions[] = {
+#ifdef MLX5_RSS_PTYPE_DEBUG
+		[MLX5_RSS_PTYPE_ACTION_INDEX - 1] = {
+			.type = RTE_FLOW_ACTION_TYPE_MARK,
+			.conf = &(const struct rte_flow_action_mark){.id = 0xfac}
+		},
+#endif
+		[MLX5_RSS_PTYPE_ACTION_INDEX] = {
+			.type = RTE_FLOW_ACTION_TYPE_RSS,
+			.conf = rss_conf
+		},
+		[MLX5_RSS_PTYPE_ACTION_INDEX + 1] = { .type = RTE_FLOW_ACTION_TYPE_END }
+	};
+
+	flow_hw_create_flow(dev, MLX5_FLOW_TYPE_GEN, &miss_attr,
+			    miss_pattern, miss_actions, 0, MLX5_FLOW_ACTION_RSS,
+			    external, &flow, error);
+	return flow;
+}
+
+static struct rte_flow_hw *
+mlx5_hw_rss_ptype_create_base_flow(struct rte_eth_dev *dev,
+				   const struct rte_flow_attr *attr,
+				   const struct rte_flow_item pattern[],
+				   const struct rte_flow_action orig_actions[],
+				   uint32_t ptype_group, uint64_t item_flags,
+				   uint64_t action_flags, bool external,
+				   enum mlx5_flow_type flow_type,
+				   struct rte_flow_error *error)
+{
+	int i = 0;
+	struct rte_flow_hw *flow = NULL;
+	struct rte_flow_action actions[MLX5_HW_MAX_ACTS];
+	enum mlx5_indirect_type indirect_type;
+
+	do {
+		switch (orig_actions[i].type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			indirect_type = (typeof(indirect_type))
+					MLX5_INDIRECT_ACTION_TYPE_GET
+					(orig_actions[i].conf);
+			if (indirect_type != MLX5_INDIRECT_ACTION_TYPE_RSS) {
+				actions[i] = orig_actions[i];
+				break;
+			}
+			/* Fall through */
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			actions[i].type = RTE_FLOW_ACTION_TYPE_JUMP;
+			actions[i].conf = &(const struct rte_flow_action_jump) {
+				.group = ptype_group
+			};
+			break;
+		default:
+			actions[i] = orig_actions[i];
+		}
+	} while (actions[i++].type != RTE_FLOW_ACTION_TYPE_END);
+	action_flags &= ~MLX5_FLOW_ACTION_RSS;
+	action_flags |= MLX5_FLOW_ACTION_JUMP;
+	flow_hw_create_flow(dev, flow_type, attr, pattern, actions,
+			    item_flags, action_flags, external, &flow, error);
+	return flow;
+}
+
+const struct rte_flow_action_rss *
+flow_nta_locate_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_action actions[],
+		    struct rte_flow_error *error)
+{
+	const struct rte_flow_action *a;
+	const struct rte_flow_action_rss *rss_conf = NULL;
+
+	for (a = actions; a->type != RTE_FLOW_ACTION_TYPE_END; a++) {
+		if (a->type == RTE_FLOW_ACTION_TYPE_RSS) {
+			rss_conf = a->conf;
+			break;
+		}
+		if (a->type == RTE_FLOW_ACTION_TYPE_INDIRECT &&
+		    MLX5_INDIRECT_ACTION_TYPE_GET(a->conf) ==
+		    MLX5_INDIRECT_ACTION_TYPE_RSS) {
+			struct mlx5_priv *priv = dev->data->dev_private;
+			struct mlx5_shared_action_rss *shared_rss;
+			uint32_t handle = (uint32_t)(uintptr_t)a->conf;
+
+			shared_rss = mlx5_ipool_get
+				(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS],
+				 MLX5_INDIRECT_ACTION_IDX_GET(handle));
+			if (!shared_rss) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+						   a->conf, "invalid shared RSS handle");
+				return NULL;
+			}
+			rss_conf = &shared_rss->origin;
+			break;
+		}
+	}
+	if (a->type == RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+		return NULL;
+	}
+	return rss_conf;
+}
+
+static __rte_always_inline void
+mlx5_nta_rss_init_ptype_ctx(struct mlx5_nta_rss_ctx *rss_ctx,
+			    struct rte_eth_dev *dev,
+			    struct rte_flow_attr *ptype_attr,
+			    struct rte_flow_item *ptype_pattern,
+			    struct rte_flow_action *ptype_actions,
+			    const struct rte_flow_action_rss *rss_conf,
+			    struct mlx5_nta_rss_flow_head *head,
+			    struct rte_flow_error *error,
+			    uint64_t item_flags,
+			    enum mlx5_flow_type flow_type, bool external)
+{
+	rss_ctx->dev = dev;
+	rss_ctx->attr = ptype_attr;
+	rss_ctx->pattern = ptype_pattern;
+	rss_ctx->actions = ptype_actions;
+	rss_ctx->rss_conf = rss_conf;
+	rss_ctx->error = error;
+	rss_ctx->head = head;
+	rss_ctx->pattern_flags = item_flags;
+	rss_ctx->flow_type = flow_type;
+	rss_ctx->external = external;
+}
+
+/*
+ * MLX5 HW hashes IPv4 and IPv6 L3 headers and UDP, TCP, ESP L4 headers.
+ * RSS expansion is required when the RSS action was configured to hash
+ * a network protocol that was not matched in the flow pattern.
+ */
+#define MLX5_PTYPE_RSS_OUTER_MASK (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6 | \
+				   RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_TCP | \
+				   RTE_PTYPE_TUNNEL_ESP)
+#define MLX5_PTYPE_RSS_INNER_MASK (RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L3_IPV6 | \
+				   RTE_PTYPE_INNER_L4_TCP | RTE_PTYPE_INNER_L4_UDP)
+
+struct rte_flow_hw *
+flow_nta_handle_rss(struct rte_eth_dev *dev,
+		    const struct rte_flow_attr *attr,
+		    const struct rte_flow_item items[],
+		    const struct rte_flow_action actions[],
+		    const struct rte_flow_action_rss *rss_conf,
+		    uint64_t item_flags, uint64_t action_flags,
+		    bool external, enum mlx5_flow_type flow_type,
+		    struct rte_flow_error *error)
+{
+	struct rte_flow_hw *rss_base = NULL, *rss_next = NULL, *rss_miss = NULL;
+	struct rte_flow_action_rss ptype_rss_conf;
+	struct mlx5_nta_rss_ctx rss_ctx;
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_conf->types);
+	bool inner_rss = rss_conf->level > 1;
+	bool outer_rss = !inner_rss;
+	bool l3_item = (outer_rss && (item_flags & MLX5_FLOW_LAYER_OUTER_L3)) ||
+		       (inner_rss && (item_flags & MLX5_FLOW_LAYER_INNER_L3));
+	bool l4_item = (outer_rss && (item_flags & MLX5_FLOW_LAYER_OUTER_L4)) ||
+		       (inner_rss && (item_flags & MLX5_FLOW_LAYER_INNER_L4));
+	bool l3_hash = rss_types & (MLX5_IPV4_LAYER_TYPES | MLX5_IPV6_LAYER_TYPES);
+	bool l4_hash = rss_types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_ESP);
+	struct mlx5_nta_rss_flow_head expansion_head = SLIST_HEAD_INITIALIZER(0);
+	struct rte_flow_attr ptype_attr = {
+		.ingress = 1
+	};
+	struct rte_flow_item_ptype ptype_spec = { .packet_type = 0 };
+	const struct rte_flow_item_ptype ptype_mask = {
+		.packet_type = outer_rss ?
+			MLX5_PTYPE_RSS_OUTER_MASK : MLX5_PTYPE_RSS_INNER_MASK
+	};
+	struct rte_flow_item ptype_pattern[MLX5_RSS_PTYPE_ITEMS_NUM] = {
+		[MLX5_RSS_PTYPE_ITEM_INDEX] = {
+			.type = RTE_FLOW_ITEM_TYPE_PTYPE,
+			.spec = &ptype_spec,
+			.mask = &ptype_mask
+		},
+		[MLX5_RSS_PTYPE_ITEM_INDEX + 1] = { .type = RTE_FLOW_ITEM_TYPE_END }
+	};
+	struct rte_flow_action ptype_actions[MLX5_RSS_PTYPE_ACTIONS_NUM] = {
+#ifdef MLX5_RSS_PTYPE_DEBUG
+		[MLX5_RSS_PTYPE_ACTION_INDEX - 1] = {
+			.type = RTE_FLOW_ACTION_TYPE_MARK,
+			.conf = &(const struct rte_flow_action_mark) {.id = 101}
+		},
+#endif
+		[MLX5_RSS_PTYPE_ACTION_INDEX] = {
+			.type = RTE_FLOW_ACTION_TYPE_RSS,
+			.conf = &ptype_rss_conf
+		},
+		[MLX5_RSS_PTYPE_ACTION_INDEX + 1] = { .type = RTE_FLOW_ACTION_TYPE_END }
+	};
+
+	if (l4_item) {
+		/*
+		 * Original flow pattern extended up to L4 level.
+		 * L4 is the maximal expansion level.
+		 * Original pattern does not need expansion.
+		 */
+		rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+		return NULL;
+	}
+	if (!l4_hash) {
+		if (!l3_hash) {
+			/*
+			 * RSS action was not configured to hash L3 or L4.
+			 * No expansion needed.
+			 */
+			rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+			return NULL;
+		}
+		if (l3_item) {
+			/*
+			 * Original flow pattern extended up to L3 level.
+			 * RSS action was not set for L4 hash.
+			 * Original pattern does not need expansion.
+			 */
+			rte_flow_error_set(error, 0, RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL);
+			return NULL;
+		}
+	}
+	/* Create RSS expansions in dedicated PTYPE flow group */
+	ptype_attr.group = mlx5_hw_get_rss_ptype_group(dev);
+	if (!ptype_attr.group) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   NULL, "cannot get RSS PTYPE group");
+		return NULL;
+	}
+	ptype_rss_conf = *rss_conf;
+	mlx5_nta_rss_init_ptype_ctx(&rss_ctx, dev, &ptype_attr, ptype_pattern,
+				    ptype_actions, rss_conf, &expansion_head,
+				    error, item_flags, flow_type, external);
+	rss_miss = mlx5_hw_rss_ptype_create_miss_flow(dev, rss_conf, ptype_attr.group,
+						      external, error);
+	if (!rss_miss)
+		goto error;
+	if (l4_hash) {
+		rss_next = mlx5_hw_rss_expand_l4(&rss_ctx);
+		if (!rss_next)
+			goto error;
+	} else if (l3_hash) {
+		rss_next = mlx5_hw_rss_expand_l3(&rss_ctx);
+		if (!rss_next)
+			goto error;
+	}
+	rss_base = mlx5_hw_rss_ptype_create_base_flow(dev, attr, items, actions,
+						      ptype_attr.group, item_flags,
+						      action_flags, external,
+						      flow_type, error);
+	if (!rss_base)
+		goto error;
+	SLIST_INSERT_HEAD(&expansion_head, rss_miss, nt2hws->next);
+	SLIST_INSERT_HEAD(&expansion_head, rss_base, nt2hws->next);
+	/*
+	 * PMD must return to application a reference to the base flow.
+	 * This way RSS expansion could work with counter, meter and other
+	 * flow actions.
+	 */
+	MLX5_ASSERT(rss_base == SLIST_FIRST(&expansion_head));
+	rss_next = SLIST_NEXT(rss_base, nt2hws->next);
+	while (rss_next) {
+		rss_next->nt2hws->chaned_flow = 1;
+		rss_next = SLIST_NEXT(rss_next, nt2hws->next);
+	}
+	return SLIST_FIRST(&expansion_head);
+
+error:
+	if (rss_miss)
+		flow_hw_list_destroy(dev, flow_type, (uintptr_t)rss_miss);
+	if (rss_next)
+		flow_hw_list_destroy(dev, flow_type, (uintptr_t)rss_next);
+	mlx5_hw_release_rss_ptype_group(dev, ptype_attr.group);
+	return NULL;
+}
+
+#endif
-- 
2.21.0