From mboxrd@z Thu Jan  1 00:00:00 1970
From: Rongwei Liu
To: dev@dpdk.org
Subject: [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions
Date: Mon, 17 Apr 2023 12:25:38 +0300
Message-ID: <20230417092540.2617450-7-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Add two dr_actions to implement IPv6 routing extension push and pop.
The new actions are combinations of multiple existing actions rather
than new action types: each consists of two modify-header actions plus
one reformat action. The action order is the same as for the existing
encap and decap actions.
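For reference, the header these actions push and pop is the IPv6 Segment Routing Header (routing extension, RFC 8754). The following sketch is illustrative only and not part of the patch; the struct name `ipv6_srh` and the helper `srh_size()` are hypothetical, but the field layout follows RFC 8754:

```c
#include <stdint.h>

/*
 * Illustrative layout of the IPv6 Segment Routing Header (RFC 8754)
 * that the new push/pop actions insert into or remove from a packet.
 */
struct ipv6_srh {
	uint8_t next_hdr;       /* protocol of the following header */
	uint8_t hdr_ext_len;    /* length in 8-byte units, excluding first 8 bytes */
	uint8_t routing_type;   /* 4 = Segment Routing */
	uint8_t segments_left;  /* segments still to be visited */
	uint8_t last_entry;     /* index of the last entry in the segment list */
	uint8_t flags;
	uint16_t tag;
	uint8_t segments[][16]; /* list of 128-bit IPv6 segment addresses */
};

/* Total SRH size in bytes for a given number of segments (hypothetical helper). */
static inline unsigned int srh_size(unsigned int nb_segments)
{
	return (unsigned int)sizeof(struct ipv6_srh) + 16u * nb_segments;
}
```

The fixed part is 8 bytes, and each segment adds one 16-byte IPv6 address, which is the data the push action's `inline_data` buffer carries.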
Signed-off-by: Rongwei Liu
---
 drivers/common/mlx5/mlx5_prm.h       |   1 +
 drivers/net/mlx5/hws/mlx5dr.h        |  41 +++
 drivers/net/mlx5/hws/mlx5dr_action.c | 380 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |   5 +
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   2 +
 5 files changed, 428 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index ed3d5efbb7..241485f905 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3438,6 +3438,7 @@ enum mlx5_ifc_header_anchors {
 	MLX5_HEADER_ANCHOR_PACKET_START = 0x0,
 	MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2,
 	MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07,
+	MLX5_HEADER_ANCHOR_TCP_UDP = 0x09,
 	MLX5_HEADER_ANCHOR_INNER_MAC = 0x13,
 	MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19,
 };
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 2b02884dc3..da058bdb4b 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -45,6 +45,8 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_PUSH_VLAN,
 	MLX5DR_ACTION_TYP_ASO_METER,
 	MLX5DR_ACTION_TYP_ASO_CT,
+	MLX5DR_ACTION_TYP_IPV6_ROUTING_POP,
+	MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH,
 	MLX5DR_ACTION_TYP_MAX,
 };
 
@@ -186,6 +188,12 @@ struct mlx5dr_rule_action {
 			uint8_t *data;
 		} reformat;
 
+		struct {
+			uint32_t offset;
+			uint8_t *data;
+			uint8_t *mhdr;
+		} recom;
+
 		struct {
 			rte_be32_t vlan_hdr;
 		} push_vlan;
@@ -614,4 +622,37 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx,
  */
 int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f);
 
+/* Check if mlx5dr action template contain srv6 push or pop actions.
+ *
+ * @param[in] at
+ *	The action template going to be parsed.
+ * @return true if containing srv6 push/pop action, false otherwise.
+ */
+bool
+mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at);
+
+/* Create multiple direct actions combination action.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] type
+ *	Type of direct rule action.
+ * @param[in] data_sz
+ *	Size in bytes of data.
+ * @param[in] inline_data
+ *	Header data array in case of inline action.
+ * @param[in] log_bulk_size
+ *	Number of unique values used with this pattern.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_recombination(struct mlx5dr_context *ctx,
+				   enum mlx5dr_action_type type,
+				   size_t data_sz,
+				   void *inline_data,
+				   uint32_t log_bulk_size,
+				   uint32_t flags);
+
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 2d93be717f..fa38654644 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -19,6 +19,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	[MLX5DR_TABLE_TYPE_NIC_RX] = {
 		BIT(MLX5DR_ACTION_TYP_TAG),
 		BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) |
+		BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) |
 		BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2),
 		BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 		BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -29,6 +30,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) |
+		BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) |
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3),
 		BIT(MLX5DR_ACTION_TYP_FT) |
 		BIT(MLX5DR_ACTION_TYP_MISS) |
@@ -46,6 +48,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) |
+		BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) |
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3),
 		BIT(MLX5DR_ACTION_TYP_FT) |
 		BIT(MLX5DR_ACTION_TYP_MISS) |
@@ -54,6 +57,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	},
 	[MLX5DR_TABLE_TYPE_FDB] = {
 		BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) |
+		BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) |
 		BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2),
 		BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 		BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -64,6 +68,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 		BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 		BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) |
+		BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) |
 		BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3),
 		BIT(MLX5DR_ACTION_TYP_FT) |
 		BIT(MLX5DR_ACTION_TYP_MISS) |
@@ -227,6 +232,18 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action,
 		mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB);
 }
 
+bool mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at)
+{
+	int i = 0;
+
+	for (i = 0; i < at->num_actions; i++) {
+		if (at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP ||
+		    at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH)
+			return true;
+	}
+	return false;
+}
+
 static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions)
 {
 	DR_LOG(ERR, "Invalid action_type sequence");
@@ -501,6 +518,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->dest_tir_num = obj->id;
 		break;
 	case MLX5DR_ACTION_TYP_TNL_L3_TO_L2:
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP:
 	case MLX5DR_ACTION_TYP_MODIFY_HDR:
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		if (action->modify_header.num_of_actions == 1) {
@@ -529,10 +547,14 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC;
 		break;
 	case MLX5DR_ACTION_TYP_L2_TO_TNL_L2:
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH:
 		attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		attr->insert_header.encap = 1;
-		attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
+		if (action->type == MLX5DR_ACTION_TYP_L2_TO_TNL_L2)
+			attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
+		else
+			attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_TCP_UDP;
 		attr->insert_header.arg_id = action->reformat.arg_obj->id;
 		attr->insert_header.header_size = action->reformat.header_size;
 		break;
@@ -1452,6 +1474,90 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx,
 	return ret;
 }
 
+static int
+mlx5dr_action_handle_ipv6_routing_pop(struct mlx5dr_context *ctx,
+				      struct mlx5dr_action *action)
+{
+	uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0};
+	void *dev = flow_hw_get_dev_from_ctx(ctx);
+	int mh_data_size, ret;
+	uint8_t *srv6_data;
+	uint8_t anchor_id;
+
+	if (dev == NULL) {
+		DR_LOG(ERR, "Invalid dev handle for IPv6 routing pop\n");
+		return -1;
+	}
+	ret = flow_dv_ipv6_routing_pop_mhdr_cmd(dev, mh_data, &anchor_id);
+	if (ret < 0) {
+		DR_LOG(ERR, "Failed to generate modify-header pattern for IPv6 routing pop\n");
+		return -1;
+	}
+	srv6_data = mh_data + MLX5DR_MODIFY_ACTION_SIZE * ret;
+	/* Remove SRv6 headers */
+	MLX5_SET(stc_ste_param_remove, srv6_data, action_type,
+		 MLX5_MODIFICATION_TYPE_REMOVE);
+	MLX5_SET(stc_ste_param_remove, srv6_data, decap, 0x1);
+	MLX5_SET(stc_ste_param_remove, srv6_data, remove_start_anchor, anchor_id);
+	MLX5_SET(stc_ste_param_remove, srv6_data, remove_end_anchor,
+		 MLX5_HEADER_ANCHOR_TCP_UDP);
+	mh_data_size = (ret + 1) * MLX5DR_MODIFY_ACTION_SIZE;
+
+	ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size,
+						  (__be64 *)mh_data, 0);
+	if (ret) {
+		DR_LOG(ERR, "Failed allocating modify-header for IPv6 routing pop\n");
+		return ret;
+	}
+
+	ret = mlx5dr_action_create_stcs(action, NULL);
+	if (ret)
+		goto free_mh_obj;
+
+	ret = mlx5dr_arg_write_inline_arg_data(ctx,
+					       action->modify_header.arg_obj->id,
+					       mh_data, mh_data_size);
+	if (ret) {
+		DR_LOG(ERR, "Failed writing INLINE arg IPv6 routing pop");
+		goto clean_stc;
+	}
+
+	return 0;
+
+clean_stc:
+	mlx5dr_action_destroy_stcs(action);
+free_mh_obj:
+	mlx5dr_pat_arg_destroy_modify_header(ctx, action);
+	return ret;
+}
+
+static int mlx5dr_action_handle_ipv6_routing_push(struct mlx5dr_context *ctx,
+						  size_t data_sz,
+						  void *data,
+						  uint32_t bulk_size,
+						  struct mlx5dr_action *action)
+{
+	int ret;
+
+	ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, action);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create args for ipv6 routing push");
+		return ret;
+	}
+
+	ret = mlx5dr_action_create_stcs(action, NULL);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create stc for ipv6 routing push");
+		goto free_arg;
+	}
+
+	return 0;
+
+free_arg:
+	mlx5dr_cmd_destroy_obj(action->reformat.arg_obj);
+	return ret;
+}
+
 static int
 mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx,
 				  size_t data_sz,
@@ -1484,6 +1590,78 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx,
 	return ret;
 }
 
+static int
+mlx5dr_action_create_push_pop_hws(struct mlx5dr_context *ctx,
+				  size_t data_sz,
+				  void *data,
+				  uint32_t bulk_size,
+				  struct mlx5dr_action *action)
+{
+	int ret;
+
+	switch (action->type) {
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP:
+		ret = mlx5dr_action_handle_ipv6_routing_pop(ctx, action);
+		break;
+
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH:
+		*((uint8_t *)data) = 0;
+		ret = mlx5dr_action_handle_ipv6_routing_push(ctx, data_sz, data,
+							     bulk_size, action);
+		break;
+
+	default:
+		assert(false);
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	return ret;
+}
+
+static struct mlx5dr_action *
+mlx5dr_action_create_push_pop(struct mlx5dr_context *ctx,
+			      enum mlx5dr_action_type action_type,
+			      size_t data_sz,
+			      void *inline_data,
+			      uint32_t log_bulk_size,
+			      uint32_t flags)
+{
+	struct mlx5dr_action *action;
+	int ret;
+
+	action = mlx5dr_action_create_generic(ctx, flags, action_type);
+	if (!action)
+		return NULL;
+
+	if (mlx5dr_action_is_root_flags(flags)) {
+		DR_LOG(ERR, "IPv6 routing push/pop is not supported over root");
+		rte_errno = ENOTSUP;
+		goto free_action;
+	}
+
+	if (!mlx5dr_action_is_hws_flags(flags) ||
+	    ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) {
+		DR_LOG(ERR, "Push/pop flags don't fit HWS (flags: %x)\n", flags);
+		rte_errno = EINVAL;
+		goto free_action;
+	}
+
+	ret = mlx5dr_action_create_push_pop_hws(ctx, data_sz, inline_data,
+						log_bulk_size, action);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create push/pop HWS.\n");
+		rte_errno = EINVAL;
+		goto free_action;
+	}
+
+	return action;
+
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 struct mlx5dr_action *
 mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
 			      enum mlx5dr_action_reformat_type reformat_type,
@@ -1540,6 +1718,169 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+static int
+mlx5dr_action_create_recom_hws(struct mlx5dr_context *ctx,
+			       size_t data_sz,
+			       void *data,
+			       uint32_t bulk_size,
+			       struct mlx5dr_action *action)
+{
+	struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD];
+	void *eth_dev = flow_hw_get_dev_from_ctx(ctx);
+	struct mlx5dr_action *sub_action;
+	int ret;
+
+	if (eth_dev == NULL) {
+		DR_LOG(ERR, "Invalid dev handle for recombination action");
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+	memset(cmd, 0, sizeof(cmd));
+	switch (action->type) {
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP:
+		ret = flow_dv_generate_ipv6_routing_pop_mhdr1(eth_dev, NULL,
+							      cmd, MLX5_MHDR_MAX_CMD);
+		if (ret < 0) {
+			DR_LOG(ERR, "Failed to generate IPv6 routing pop action1 pattern");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		sub_action = mlx5dr_action_create_modify_header(ctx,
+				sizeof(struct mlx5_modification_cmd) * ret,
+				(__be64 *)cmd, 0, action->flags);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing pop action1");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		action->recom.action1 = sub_action;
+		memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD);
+		ret = flow_dv_generate_ipv6_routing_pop_mhdr2(eth_dev, NULL,
+							      cmd, MLX5_MHDR_MAX_CMD);
+		if (ret < 0) {
+			DR_LOG(ERR, "Failed to generate IPv6 routing pop action2 pattern");
+			goto err;
+		}
+		sub_action = mlx5dr_action_create_modify_header(ctx,
+				sizeof(struct mlx5_modification_cmd) * ret,
+				(__be64 *)cmd, 0, action->flags);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing pop action2");
+			goto err;
+		}
+		action->recom.action2 = sub_action;
+		sub_action = mlx5dr_action_create_push_pop(ctx,
+				MLX5DR_ACTION_TYP_IPV6_ROUTING_POP,
+				data_sz, data, bulk_size, action->flags);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing pop action3");
+			goto err;
+		}
+		action->recom.action3 = sub_action;
+		break;
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH:
+		ret = flow_dv_generate_ipv6_routing_push_mhdr1(eth_dev, NULL,
+							       cmd, MLX5_MHDR_MAX_CMD);
+		if (ret < 0) {
+			DR_LOG(ERR, "Failed to generate IPv6 routing push action2 pattern");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		sub_action = mlx5dr_action_create_modify_header(ctx,
+				sizeof(struct mlx5_modification_cmd) * ret,
+				(__be64 *)cmd, 0, action->flags | MLX5DR_ACTION_FLAG_SHARED);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing push action2");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		action->recom.action2 = sub_action;
+		memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD);
+		ret = flow_dv_generate_ipv6_routing_push_mhdr2(eth_dev, NULL, cmd,
+							       MLX5_MHDR_MAX_CMD, data);
+		if (ret < 0) {
+			DR_LOG(ERR, "Failed to generate IPv6 routing push action3 pattern");
+			goto err;
+		}
+		sub_action = mlx5dr_action_create_modify_header(ctx,
+				sizeof(struct mlx5_modification_cmd) * ret,
+				(__be64 *)cmd, bulk_size, action->flags);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing push action3");
+			goto err;
+		}
+		action->recom.action3 = sub_action;
+		sub_action = mlx5dr_action_create_push_pop(ctx,
+				MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH,
+				data_sz, data, bulk_size, action->flags);
+		if (!sub_action) {
+			DR_LOG(ERR, "Failed to create IPv6 routing push action1");
+			goto err;
+		}
+		action->recom.action1 = sub_action;
+		break;
+	default:
+		assert(false);
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	return 0;
+
+err:
+	if (action->recom.action1)
+		mlx5dr_action_destroy(action->recom.action1);
+	if (action->recom.action2)
+		mlx5dr_action_destroy(action->recom.action2);
+	if (action->recom.action3)
+		mlx5dr_action_destroy(action->recom.action3);
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_recombination(struct mlx5dr_context *ctx,
+				   enum mlx5dr_action_type action_type,
+				   size_t data_sz,
+				   void *inline_data,
+				   uint32_t log_bulk_size,
+				   uint32_t flags)
+{
+	struct mlx5dr_action *action;
+	int ret;
+
+	action = mlx5dr_action_create_generic(ctx, flags, action_type);
+	if (!action)
+		return NULL;
+
+	if (!mlx5dr_action_is_hws_flags(flags) ||
+	    ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) {
+		DR_LOG(ERR, "Recom flags don't fit HWS (flags: %x)\n", flags);
+		rte_errno = EINVAL;
+		goto free_action;
+	}
+
+	if (action_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP && log_bulk_size) {
+		DR_LOG(ERR, "IPv6 POP must be shared");
+		rte_errno = EINVAL;
+		goto free_action;
+	}
+
+	ret = mlx5dr_action_create_recom_hws(ctx, data_sz, inline_data,
+					     log_bulk_size, action);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create recombination.\n");
+		rte_errno = EINVAL;
+		goto free_action;
+	}
+
+	return action;
+
+free_action:
+	simple_free(action);
+	return NULL;
+}
+
 static int
 mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action,
 					size_t actions_sz,
@@ -1677,6 +2018,43 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
 		mlx5dr_action_destroy_stcs(action);
 		mlx5dr_cmd_destroy_obj(action->reformat.arg_obj);
 		break;
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP:
+		if (action->recom.action1) {
+			mlx5dr_action_destroy_stcs(action->recom.action1);
+			mlx5dr_pat_arg_destroy_modify_header(action->recom.action1->ctx,
+							     action->recom.action1);
+			simple_free(action->recom.action1);
+		}
+		if (action->recom.action2) {
+			mlx5dr_action_destroy_stcs(action->recom.action2);
+			mlx5dr_pat_arg_destroy_modify_header(action->recom.action2->ctx,
+							     action->recom.action2);
+			simple_free(action->recom.action2);
+		}
+		if (action->recom.action3) {
+			mlx5dr_action_destroy_stcs(action->recom.action3);
+			mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx,
+							     action->recom.action3);
+			simple_free(action->recom.action3);
+		}
+		break;
+	case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH:
+		if (action->recom.action1) {
+			mlx5dr_action_destroy_stcs(action->recom.action1);
+			mlx5dr_cmd_destroy_obj(action->recom.action1->reformat.arg_obj);
+			simple_free(action->recom.action1);
+		}
+		if (action->recom.action2) {
+			mlx5dr_action_destroy_stcs(action->recom.action2);
+			simple_free(action->recom.action2);
+		}
+		if (action->recom.action3) {
+			mlx5dr_action_destroy_stcs(action->recom.action3);
+			mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx,
+							     action->recom.action3);
+			simple_free(action->recom.action3);
+		}
+		break;
 	}
 }
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 17619c0057..cb51f81da1 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -130,6 +130,11 @@ struct mlx5dr_action {
 				struct mlx5dr_devx_obj *arg_obj;
 				uint32_t header_size;
 			} reformat;
+			struct {
+				struct mlx5dr_action *action1;
+				struct mlx5dr_action *action2;
+				struct mlx5dr_action *action3;
+			} recom;
 			struct {
 				struct mlx5dr_devx_obj *devx_obj;
 				uint8_t return_reg_id;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index b8049a173d..1a6ad4dd71 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -22,6 +22,8 @@ const char *mlx5dr_debug_action_type_str[] = {
 	[MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN",
 	[MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER",
 	[MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT",
+	[MLX5DR_ACTION_TYP_IPV6_ROUTING_POP] = "POP_IPV6_ROUTING",
+	[MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH] = "PUSH_IPV6_ROUTING",
 };
 
 static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
-- 
2.27.0
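
A note for reviewers: the error path of mlx5dr_action_create_recom_hws() relies on destroying every sub-action created so far before returning. The toy model below (not driver code; all names are hypothetical) isolates that cleanup discipline so it can be checked independently of the hardware-specific calls:

```c
#include <stdlib.h>

/*
 * Toy model of the cleanup discipline in mlx5dr_action_create_recom_hws():
 * a combined action owns up to three sub-actions; on any partial failure,
 * every sub-action created so far is destroyed before returning an error.
 */
struct sub_action { int id; };

struct recom_action {
	struct sub_action *action1;
	struct sub_action *action2;
	struct sub_action *action3;
};

static int live_sub_actions; /* counts outstanding sub-actions */

static struct sub_action *sub_action_create(int id, int fail)
{
	struct sub_action *a;

	if (fail)
		return NULL;
	a = malloc(sizeof(*a));
	a->id = id;
	live_sub_actions++;
	return a;
}

static void sub_action_destroy(struct sub_action *a)
{
	free(a);
	live_sub_actions--;
}

/* fail_at: 0 = no failure, N = the Nth sub-create fails (test knob). */
static int recom_create(struct recom_action *r, int fail_at)
{
	r->action1 = r->action2 = r->action3 = NULL;

	if (!(r->action1 = sub_action_create(1, fail_at == 1)))
		goto err;
	if (!(r->action2 = sub_action_create(2, fail_at == 2)))
		goto err;
	if (!(r->action3 = sub_action_create(3, fail_at == 3)))
		goto err;
	return 0;

err:	/* mirrors the patch's err: label -- unwind whatever was created */
	if (r->action1)
		sub_action_destroy(r->action1);
	if (r->action2)
		sub_action_destroy(r->action2);
	if (r->action3)
		sub_action_destroy(r->action3);
	return -1;
}
```

Because all three pointers are cleared up front and the unwind checks each one, the same err: label is safe no matter which create step failed, which is the property the patch's POP and PUSH branches both depend on.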