From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
 stephen@networkplumber.org, Danylo Vodopianov
Subject: [PATCH v2 25/73] net/ntnic: add items gtp and actions raw encap/decap
Date: Tue, 22 Oct 2024 18:54:42 +0200
Message-ID: <20241022165541.3186140-26-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241022165541.3186140-1-sil-plv@napatech.com>
References: <20241021210527.2075431-1-sil-plv@napatech.com>
 <20241022165541.3186140-1-sil-plv@napatech.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: Danylo Vodopianov

Add support for the following rte_flow items and actions:
 * RTE_FLOW_ITEM_TYPE_GTP
 * RTE_FLOW_ITEM_TYPE_GTP_PSC
 * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
 * RTE_FLOW_ACTION_TYPE_RAW_DECAP

Signed-off-by: Danylo Vodopianov
---
 doc/guides/nics/features/ntnic.ini            |   4 +
 drivers/net/ntnic/include/create_elements.h   |   4 +
 drivers/net/ntnic/include/flow_api_engine.h   |  40 ++
 drivers/net/ntnic/include/hw_mod_backend.h    |   4 +
 .../ntnic/include/stream_binary_flow_api.h    |  22 ++
 .../profile_inline/flow_api_profile_inline.c  | 366 +++++++++++++++++-
 drivers/net/ntnic/ntnic_filter/ntnic_filter.c | 278 ++++++++++++-
 7 files changed, 713 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/features/ntnic.ini b/doc/guides/nics/features/ntnic.ini
index 4201c8e8b9..4cb9509742 100644
--- a/doc/guides/nics/features/ntnic.ini
+++ b/doc/guides/nics/features/ntnic.ini
@@ -16,6 +16,8 @@ x86-64 = Y
 [rte_flow items]
 any = Y
 eth = Y
+gtp = Y
+gtp_psc = Y
 icmp = Y
 icmp6 = Y
 ipv4 = Y
@@ -33,3 +35,5 @@ mark = Y
 modify_field = Y
 port_id = Y
 queue = Y
+raw_decap = Y
+raw_encap = Y
diff --git a/drivers/net/ntnic/include/create_elements.h b/drivers/net/ntnic/include/create_elements.h
index 179542d2b2..70e6cad195 100644
--- a/drivers/net/ntnic/include/create_elements.h
+++ b/drivers/net/ntnic/include/create_elements.h
@@ -27,6 +27,8 @@ struct cnv_attr_s {

 struct cnv_action_s {
 	struct rte_flow_action flow_actions[MAX_ACTIONS];
+	struct flow_action_raw_encap encap;
+	struct flow_action_raw_decap decap;
 	struct rte_flow_action_queue queue;
 };

@@ -52,6 +54,8 @@ enum nt_rte_flow_item_type {
 };

 extern rte_spinlock_t flow_lock;
+
+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out);
 int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error);
 int create_attr(struct cnv_attr_s *attribute, const struct rte_flow_attr *attr);
 int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item items[],
diff --git a/drivers/net/ntnic/include/flow_api_engine.h b/drivers/net/ntnic/include/flow_api_engine.h
index f6557d0d20..b1d39b919b 100644
--- a/drivers/net/ntnic/include/flow_api_engine.h
+++ b/drivers/net/ntnic/include/flow_api_engine.h
@@ -56,6 +56,29 @@ enum res_type_e {

 #define MAX_MATCH_FIELDS 16

+/*
+ * Tunnel encapsulation header definition
+ */
+#define MAX_TUN_HDR_SIZE 128
+struct tunnel_header_s {
+	union {
+		uint8_t hdr8[MAX_TUN_HDR_SIZE];
+		uint32_t hdr32[(MAX_TUN_HDR_SIZE + 3) / 4];
+	} d;
+	uint32_t user_port_id;
+	uint8_t len;
+
+	uint8_t nb_vlans;
+
+	uint8_t ip_version;	/* 4: v4, 6: v6 */
+	uint16_t ip_csum_precalc;
+
+	uint8_t new_outer;
+	uint8_t l2_len;
+	uint8_t l3_len;
+	uint8_t l4_len;
+};
+
 struct match_elem_s {
 	int masked_for_tcam;	/* if potentially selected for TCAM */
 	uint32_t e_word[4];
@@ -124,6 +147,23 @@ struct nic_flow_def {

 	int full_offload;

+	/*
+	 * Action push tunnel
+	 */
+	struct tunnel_header_s tun_hdr;
+
+	/*
+	 * If the DPDK RTE tunnel helper API is used,
+	 * this holds the tunnel used in the flow
+	 */
+	struct tunnel_s *tnl;
+
+	/*
+	 * Header Stripper
+	 */
+	int header_strip_end_dyn;
+	int header_strip_end_ofs;
+
 	/*
 	 * Modify field
 	 */
diff --git a/drivers/net/ntnic/include/hw_mod_backend.h b/drivers/net/ntnic/include/hw_mod_backend.h
index 6a8a38636f..1b45ea4296 100644
--- a/drivers/net/ntnic/include/hw_mod_backend.h
+++ b/drivers/net/ntnic/include/hw_mod_backend.h
@@ -175,6 +175,10 @@ enum {
 	PROT_L4_ICMP = 4
 };

+enum {
+	PROT_TUN_GTPV1U = 6,
+};
+
 enum {
 	PROT_TUN_L3_OTHER = 0,
 	PROT_TUN_L3_IPV4 = 1,
diff --git a/drivers/net/ntnic/include/stream_binary_flow_api.h b/drivers/net/ntnic/include/stream_binary_flow_api.h
index d878b848c2..8097518d61 100644
--- a/drivers/net/ntnic/include/stream_binary_flow_api.h
+++ b/drivers/net/ntnic/include/stream_binary_flow_api.h
@@ -18,6 +18,7 @@

 #define FLOW_MAX_QUEUES 128

+#define RAW_ENCAP_DECAP_ELEMS_MAX 16
 /*
  * Flow eth dev profile determines how the FPGA module resources are
  * managed and what features are available
@@ -31,6 +32,27 @@ struct flow_queue_id_s {
 	int hw_id;
 };

+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_ENCAP
+ */
+struct flow_action_raw_encap {
+	uint8_t *data;
+	uint8_t *preserve;
+	size_t size;
+	struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+	int item_count;
+};
+
+/*
+ * RTE_FLOW_ACTION_TYPE_RAW_DECAP
+ */
+struct flow_action_raw_decap {
+	uint8_t *data;
+	size_t size;
+	struct rte_flow_item items[RAW_ENCAP_DECAP_ELEMS_MAX];
+	int item_count;
+};
+
 struct flow_eth_dev;	/* port device */
 struct flow_handle;
diff --git a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
index 2cda2e8b14..9fc4908975 100644
--- a/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
+++ b/drivers/net/ntnic/nthw/flow_api/profile_inline/flow_api_profile_inline.c
@@ -463,6 +463,202 @@ static int interpret_flow_actions(const struct flow_eth_dev *dev,

 			break;

+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_ENCAP", dev);
+
+			if (action[aidx].conf) {
+				const struct flow_action_raw_encap *encap =
+					(const struct flow_action_raw_encap *)action[aidx].conf;
+				const struct flow_action_raw_encap *encap_mask = action_mask
+					? (const struct flow_action_raw_encap *)action_mask[aidx].conf
+					: NULL;
+				const struct rte_flow_item *items = encap->items;
+
+				if (encap_decap_order != 1) {
+					NT_LOG(ERR, FILTER,
+						"ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+					flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+					return -1;
+				}
+
+				if (encap->size == 0 || encap->size > 255 ||
+					encap->item_count < 2) {
+					NT_LOG(ERR, FILTER,
+						"ERROR: - RAW_ENCAP data/size invalid.");
+					flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+					return -1;
+				}
+
+				encap_decap_order = 2;
+
+				fd->tun_hdr.len = (uint8_t)encap->size;
+
+				if (encap_mask) {
+					memcpy_mask_if(fd->tun_hdr.d.hdr8, encap->data,
+						encap_mask->data, fd->tun_hdr.len);
+
+				} else {
+					memcpy(fd->tun_hdr.d.hdr8, encap->data, fd->tun_hdr.len);
+				}
+
+				while (items->type != RTE_FLOW_ITEM_TYPE_END) {
+					switch (items->type) {
+					case RTE_FLOW_ITEM_TYPE_ETH:
+						fd->tun_hdr.l2_len = 14;
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_VLAN:
+						fd->tun_hdr.nb_vlans += 1;
+						fd->tun_hdr.l2_len += 4;
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_IPV4:
+						fd->tun_hdr.ip_version = 4;
+						fd->tun_hdr.l3_len = sizeof(struct rte_ipv4_hdr);
+						fd->tun_hdr.new_outer = 1;
+
+						/* Patch length */
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 2] = 0x07;
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 3] = 0xfd;
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_IPV6:
+						fd->tun_hdr.ip_version = 6;
+						fd->tun_hdr.l3_len = sizeof(struct rte_ipv6_hdr);
+						fd->tun_hdr.new_outer = 1;
+
+						/* Patch length */
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 4] = 0x07;
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len + 5] = 0xfd;
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_SCTP:
+						fd->tun_hdr.l4_len = sizeof(struct rte_sctp_hdr);
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_TCP:
+						fd->tun_hdr.l4_len = sizeof(struct rte_tcp_hdr);
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_UDP:
+						fd->tun_hdr.l4_len = sizeof(struct rte_udp_hdr);
+
+						/* Patch length */
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+							fd->tun_hdr.l3_len + 4] = 0x07;
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+							fd->tun_hdr.l3_len + 5] = 0xfd;
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_ICMP:
+						fd->tun_hdr.l4_len = sizeof(struct rte_icmp_hdr);
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_ICMP6:
+						fd->tun_hdr.l4_len =
+							sizeof(struct rte_flow_item_icmp6);
+						break;
+
+					case RTE_FLOW_ITEM_TYPE_GTP:
+						/* Patch length */
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+							fd->tun_hdr.l3_len +
+							fd->tun_hdr.l4_len + 2] = 0x07;
+						fd->tun_hdr.d.hdr8[fd->tun_hdr.l2_len +
+							fd->tun_hdr.l3_len +
+							fd->tun_hdr.l4_len + 3] = 0xfd;
+						break;
+
+					default:
+						break;
+					}
+
+					items++;
+				}
+
+				if (fd->tun_hdr.nb_vlans > 3) {
+					NT_LOG(ERR, FILTER,
+						"ERROR: - Encapsulation with %d vlans not supported.",
+						(int)fd->tun_hdr.nb_vlans);
+					flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+					return -1;
+				}
+
+				/* Convert encap data to 128-bit little endian */
+				for (size_t i = 0; i < (encap->size + 15) / 16; ++i) {
+					uint8_t *data = fd->tun_hdr.d.hdr8 + i * 16;
+
+					for (unsigned int j = 0; j < 8; ++j) {
+						uint8_t t = data[j];
+						data[j] = data[15 - j];
+						data[15 - j] = t;
+					}
+				}
+			}
+
+			break;
+
+		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+			NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_RAW_DECAP", dev);
+
+			if (action[aidx].conf) {
+				/* Mask is N/A for RAW_DECAP */
+				const struct flow_action_raw_decap *decap =
+					(const struct flow_action_raw_decap *)action[aidx].conf;
+
+				if (encap_decap_order != 0) {
+					NT_LOG(ERR, FILTER,
+						"ERROR: - RAW_ENCAP must follow RAW_DECAP.");
+					flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+					return -1;
+				}
+
+				if (decap->item_count < 2) {
+					NT_LOG(ERR, FILTER,
+						"ERROR: - RAW_DECAP must decap something.");
+					flow_nic_set_error(ERR_ACTION_UNSUPPORTED, error);
+					return -1;
+				}
+
+				encap_decap_order = 1;
+
+				switch (decap->items[decap->item_count - 2].type) {
+				case RTE_FLOW_ITEM_TYPE_ETH:
+				case RTE_FLOW_ITEM_TYPE_VLAN:
+					fd->header_strip_end_dyn = DYN_L3;
+					fd->header_strip_end_ofs = 0;
+					break;
+
+				case RTE_FLOW_ITEM_TYPE_IPV4:
+				case RTE_FLOW_ITEM_TYPE_IPV6:
+					fd->header_strip_end_dyn = DYN_L4;
+					fd->header_strip_end_ofs = 0;
+					break;
+
+				case RTE_FLOW_ITEM_TYPE_SCTP:
+				case RTE_FLOW_ITEM_TYPE_TCP:
+				case RTE_FLOW_ITEM_TYPE_UDP:
+				case RTE_FLOW_ITEM_TYPE_ICMP:
+				case RTE_FLOW_ITEM_TYPE_ICMP6:
+					fd->header_strip_end_dyn = DYN_L4_PAYLOAD;
+					fd->header_strip_end_ofs = 0;
+					break;
+
+				case RTE_FLOW_ITEM_TYPE_GTP:
+					fd->header_strip_end_dyn = DYN_TUN_L3;
+					fd->header_strip_end_ofs = 0;
+					break;
+
+				default:
+					fd->header_strip_end_dyn = DYN_L2;
+					fd->header_strip_end_ofs = 0;
+					break;
+				}
+			}
+
+			break;
+
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			NT_LOG(DBG, FILTER, "Dev:%p: RTE_FLOW_ACTION_TYPE_MODIFY_FIELD", dev);
 			{
@@ -1766,6 +1962,174 @@ static int interpret_flow_elements(const struct flow_eth_dev *dev,

 			break;

+		case RTE_FLOW_ITEM_TYPE_GTP:
+			NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP",
+				dev->ndev->adapter_no, dev->port);
+			{
+				const struct rte_gtp_hdr *gtp_spec =
+					(const struct rte_gtp_hdr *)elem[eidx].spec;
+				const struct rte_gtp_hdr *gtp_mask =
+					(const struct rte_gtp_hdr *)elem[eidx].mask;
+
+				if (gtp_spec == NULL || gtp_mask == NULL) {
+					fd->tunnel_prot = PROT_TUN_GTPV1U;
+					break;
+				}
+
+				if (gtp_mask->gtp_hdr_info != 0 ||
+					gtp_mask->msg_type != 0 || gtp_mask->plen != 0) {
+					NT_LOG(ERR, FILTER,
+						"Requested GTP field not supported by running SW version");
+					flow_nic_set_error(ERR_FAILED, error);
+					return -1;
+				}
+
+				if (gtp_mask->teid) {
+					if (sw_counter < 2) {
+						uint32_t *sw_data =
+							&packet_data[1 - sw_counter];
+						uint32_t *sw_mask =
+							&packet_mask[1 - sw_counter];
+
+						sw_mask[0] = ntohl(gtp_mask->teid);
+						sw_data[0] =
+							ntohl(gtp_spec->teid) & sw_mask[0];
+
+						km_add_match_elem(&fd->km, &sw_data[0],
+							&sw_mask[0], 1,
+							DYN_L4_PAYLOAD, 4);
+						set_key_def_sw(key_def, sw_counter,
+							DYN_L4_PAYLOAD, 4);
+						sw_counter += 1;
+
+					} else if (qw_counter < 2 && qw_free > 0) {
+						uint32_t *qw_data =
+							&packet_data[2 + 4 - qw_counter * 4];
+						uint32_t *qw_mask =
+							&packet_mask[2 + 4 - qw_counter * 4];
+
+						qw_data[0] = ntohl(gtp_spec->teid);
+						qw_data[1] = 0;
+						qw_data[2] = 0;
+						qw_data[3] = 0;
+
+						qw_mask[0] = ntohl(gtp_mask->teid);
+						qw_mask[1] = 0;
+						qw_mask[2] = 0;
+						qw_mask[3] = 0;
+
+						qw_data[0] &= qw_mask[0];
+						qw_data[1] &= qw_mask[1];
+						qw_data[2] &= qw_mask[2];
+						qw_data[3] &= qw_mask[3];
+
+						km_add_match_elem(&fd->km, &qw_data[0],
+							&qw_mask[0], 4,
+							DYN_L4_PAYLOAD, 4);
+						set_key_def_qw(key_def, qw_counter,
+							DYN_L4_PAYLOAD, 4);
+						qw_counter += 1;
+						qw_free -= 1;
+
+					} else {
+						NT_LOG(ERR, FILTER,
+							"Key size too big. Out of SW-QW resources.");
+						flow_nic_set_error(ERR_FAILED, error);
+						return -1;
+					}
+				}
+
+				fd->tunnel_prot = PROT_TUN_GTPV1U;
+			}
+
+			break;
+
+		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
+			NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_GTP_PSC",
+				dev->ndev->adapter_no, dev->port);
+			{
+				const struct rte_gtp_psc_generic_hdr *gtp_psc_spec =
+					(const struct rte_gtp_psc_generic_hdr *)elem[eidx].spec;
+				const struct rte_gtp_psc_generic_hdr *gtp_psc_mask =
+					(const struct rte_gtp_psc_generic_hdr *)elem[eidx].mask;
+
+				if (gtp_psc_spec == NULL || gtp_psc_mask == NULL) {
+					fd->tunnel_prot = PROT_TUN_GTPV1U;
+					break;
+				}
+
+				if (gtp_psc_mask->type != 0 ||
+					gtp_psc_mask->ext_hdr_len != 0) {
+					NT_LOG(ERR, FILTER,
+						"Requested GTP PSC field is not supported by running SW version");
+					flow_nic_set_error(ERR_FAILED, error);
+					return -1;
+				}
+
+				if (gtp_psc_mask->qfi) {
+					if (sw_counter < 2) {
+						uint32_t *sw_data =
+							&packet_data[1 - sw_counter];
+						uint32_t *sw_mask =
+							&packet_mask[1 - sw_counter];
+
+						sw_mask[0] = ntohl(gtp_psc_mask->qfi);
+						sw_data[0] = ntohl(gtp_psc_spec->qfi) &
+							sw_mask[0];
+
+						km_add_match_elem(&fd->km, &sw_data[0],
+							&sw_mask[0], 1,
+							DYN_L4_PAYLOAD, 14);
+						set_key_def_sw(key_def, sw_counter,
+							DYN_L4_PAYLOAD, 14);
+						sw_counter += 1;
+
+					} else if (qw_counter < 2 && qw_free > 0) {
+						uint32_t *qw_data =
+							&packet_data[2 + 4 - qw_counter * 4];
+						uint32_t *qw_mask =
+							&packet_mask[2 + 4 - qw_counter * 4];
+
+						qw_data[0] = ntohl(gtp_psc_spec->qfi);
+						qw_data[1] = 0;
+						qw_data[2] = 0;
+						qw_data[3] = 0;
+
+						qw_mask[0] = ntohl(gtp_psc_mask->qfi);
+						qw_mask[1] = 0;
+						qw_mask[2] = 0;
+						qw_mask[3] = 0;
+
+						qw_data[0] &= qw_mask[0];
+						qw_data[1] &= qw_mask[1];
+						qw_data[2] &= qw_mask[2];
+						qw_data[3] &= qw_mask[3];
+
+						km_add_match_elem(&fd->km, &qw_data[0],
+							&qw_mask[0], 4,
+							DYN_L4_PAYLOAD, 14);
+						set_key_def_qw(key_def, qw_counter,
+							DYN_L4_PAYLOAD, 14);
+						qw_counter += 1;
+						qw_free -= 1;
+
+					} else {
+						NT_LOG(ERR, FILTER,
+							"Key size too big. Out of SW-QW resources.");
+						flow_nic_set_error(ERR_FAILED, error);
+						return -1;
+					}
+				}
+
+				fd->tunnel_prot = PROT_TUN_GTPV1U;
+			}
+
+			break;
+
 		case RTE_FLOW_ITEM_TYPE_PORT_ID:
 			NT_LOG(DBG, FILTER, "Adap %i, Port %i: RTE_FLOW_ITEM_TYPE_PORT_ID",
 				dev->ndev->adapter_no, dev->port);
@@ -1929,7 +2293,7 @@ static struct flow_handle *create_flow_filter(struct flow_eth_dev *dev, struct n
 	uint16_t forced_vlan_vid __rte_unused, uint16_t caller_id,
 	struct rte_flow_error *error, uint32_t port_id,
 	uint32_t num_dest_port __rte_unused, uint32_t num_queues __rte_unused,
-	uint32_t *packet_data __rte_unused, uint32_t *packet_mask __rte_unused,
+	uint32_t *packet_data, uint32_t *packet_mask __rte_unused,
 	struct flm_flow_key_def_s *key_def __rte_unused)
 {
 	struct flow_handle *fh = calloc(1, sizeof(struct flow_handle));
diff --git a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
index b9d723c9dd..df391b6399 100644
--- a/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
+++ b/drivers/net/ntnic/ntnic_filter/ntnic_filter.c
@@ -16,6 +16,211 @@ rte_spinlock_t flow_lock = RTE_SPINLOCK_INITIALIZER;

 static struct rte_flow nt_flows[MAX_RTE_FLOWS];

+int interpret_raw_data(uint8_t *data, uint8_t *preserve, int size, struct rte_flow_item *out)
+{
+	int hdri = 0;
+	int pkti = 0;
+
+	/* Ethernet */
+	if (size - pkti == 0)
+		goto interpret_end;
+
+	if (size - pkti < (int)sizeof(struct rte_ether_hdr))
+		return -1;
+
+	out[hdri].type = RTE_FLOW_ITEM_TYPE_ETH;
+	out[hdri].spec = &data[pkti];
+	out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+	rte_be16_t ether_type = ((struct rte_ether_hdr *)&data[pkti])->ether_type;
+
+	hdri += 1;
+	pkti += sizeof(struct rte_ether_hdr);
+
+	if (size - pkti == 0)
+		goto interpret_end;
+
+	/* VLAN */
+	while (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+		ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ) ||
+		ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ1)) {
+		if (size - pkti == 0)
+			goto interpret_end;
+
+		if (size - pkti < (int)sizeof(struct rte_vlan_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_VLAN;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		ether_type = ((struct rte_vlan_hdr *)&data[pkti])->eth_proto;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_vlan_hdr);
+	}
+
+	if (size - pkti == 0)
+		goto interpret_end;
+
+	/* Layer 3 */
+	uint8_t next_header = 0;
+
+	if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) && (data[pkti] & 0xF0) == 0x40) {
+		if (size - pkti < (int)sizeof(struct rte_ipv4_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_IPV4;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		next_header = data[pkti + 9];
+
+		hdri += 1;
+		pkti += sizeof(struct rte_ipv4_hdr);
+
+	} else {
+		return -1;
+	}
+
+	if (size - pkti == 0)
+		goto interpret_end;
+
+	/* Layer 4 */
+	int gtpu_encap = 0;
+
+	if (next_header == 1) {	/* ICMP */
+		if (size - pkti < (int)sizeof(struct rte_icmp_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_icmp_hdr);
+
+	} else if (next_header == 58) {	/* ICMP6 */
+		if (size - pkti < (int)sizeof(struct rte_flow_item_icmp6))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_ICMP6;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_icmp_hdr);
+
+	} else if (next_header == 6) {	/* TCP */
+		if (size - pkti < (int)sizeof(struct rte_tcp_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_TCP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_tcp_hdr);
+
+	} else if (next_header == 17) {	/* UDP */
+		if (size - pkti < (int)sizeof(struct rte_udp_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_UDP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		gtpu_encap = ((struct rte_udp_hdr *)&data[pkti])->dst_port ==
+			rte_cpu_to_be_16(RTE_GTPU_UDP_PORT);
+
+		hdri += 1;
+		pkti += sizeof(struct rte_udp_hdr);
+
+	} else if (next_header == 132) {	/* SCTP */
+		if (size - pkti < (int)sizeof(struct rte_sctp_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_SCTP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_sctp_hdr);
+
+	} else {
+		return -1;
+	}
+
+	if (size - pkti == 0)
+		goto interpret_end;
+
+	/* GTPv1-U */
+	if (gtpu_encap) {
+		if (size - pkti < (int)sizeof(struct rte_gtp_hdr))
+			return -1;
+
+		out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+		out[hdri].spec = &data[pkti];
+		out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+		int extension_present_bit = ((struct rte_gtp_hdr *)&data[pkti])->e;
+
+		hdri += 1;
+		pkti += sizeof(struct rte_gtp_hdr);
+
+		if (extension_present_bit) {
+			if (size - pkti < (int)sizeof(struct rte_gtp_hdr_ext_word))
+				return -1;
+
+			out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+			out[hdri].spec = &data[pkti];
+			out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+			uint8_t next_ext = ((struct rte_gtp_hdr_ext_word *)&data[pkti])->next_ext;
+
+			hdri += 1;
+			pkti += sizeof(struct rte_gtp_hdr_ext_word);
+
+			while (next_ext) {
+				size_t ext_len = data[pkti] * 4;
+
+				if (size - pkti < (int)ext_len)
+					return -1;
+
+				out[hdri].type = RTE_FLOW_ITEM_TYPE_GTP;
+				out[hdri].spec = &data[pkti];
+				out[hdri].mask = (preserve != NULL) ? &preserve[pkti] : NULL;
+
+				next_ext = data[pkti + ext_len - 1];
+
+				hdri += 1;
+				pkti += ext_len;
+			}
+		}
+	}
+
+	if (size - pkti != 0)
+		return -1;
+
+interpret_end:
+	out[hdri].type = RTE_FLOW_ITEM_TYPE_END;
+	out[hdri].spec = NULL;
+	out[hdri].mask = NULL;
+
+	return hdri + 1;
+}
+
 int convert_error(struct rte_flow_error *error, struct rte_flow_error *rte_flow_error)
 {
 	if (error) {
@@ -95,13 +300,78 @@ int create_match_elements(struct cnv_match_s *match, const struct rte_flow_item
 	return (type >= 0) ? 0 : -1;
 }

-int create_action_elements_inline(struct cnv_action_s *action __rte_unused,
-	const struct rte_flow_action actions[] __rte_unused,
-	int max_elem __rte_unused,
-	uint32_t queue_offset __rte_unused)
+int create_action_elements_inline(struct cnv_action_s *action,
+	const struct rte_flow_action actions[],
+	int max_elem,
+	uint32_t queue_offset)
 {
+	int aidx = 0;
 	int type = -1;

+	do {
+		type = actions[aidx].type;
+		if (type >= 0) {
+			action->flow_actions[aidx].type = type;
+
+			/*
+			 * Non-compatible actions handled here
+			 */
+			switch (type) {
+			case RTE_FLOW_ACTION_TYPE_RAW_DECAP: {
+				const struct rte_flow_action_raw_decap *decap =
+					(const struct rte_flow_action_raw_decap *)actions[aidx].conf;
+				int item_count = interpret_raw_data(decap->data, NULL, decap->size,
+					action->decap.items);
+
+				if (item_count < 0)
+					return item_count;
+				action->decap.data = decap->data;
+				action->decap.size = decap->size;
+				action->decap.item_count = item_count;
+				action->flow_actions[aidx].conf = &action->decap;
+			}
+			break;
+
+			case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: {
+				const struct rte_flow_action_raw_encap *encap =
+					(const struct rte_flow_action_raw_encap *)actions[aidx].conf;
+				int item_count = interpret_raw_data(encap->data, encap->preserve,
+					encap->size, action->encap.items);
+
+				if (item_count < 0)
+					return item_count;
+				action->encap.data = encap->data;
+				action->encap.preserve = encap->preserve;
+				action->encap.size = encap->size;
+				action->encap.item_count = item_count;
+				action->flow_actions[aidx].conf = &action->encap;
+			}
+			break;
+
+			case RTE_FLOW_ACTION_TYPE_QUEUE: {
+				const struct rte_flow_action_queue *queue =
+					(const struct rte_flow_action_queue *)actions[aidx].conf;
+				action->queue.index = queue->index + queue_offset;
+				action->flow_actions[aidx].conf = &action->queue;
+			}
+			break;
+
+			default: {
+				action->flow_actions[aidx].conf = actions[aidx].conf;
+			}
+			break;
+			}
+
+			aidx++;
+
+			if (aidx == max_elem)
+				return -1;
+		}
+
+	} while (type >= 0 && type != RTE_FLOW_ITEM_TYPE_END);
+
 	return (type >= 0) ? 0 : -1;
 }
-- 
2.45.0
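
[Editorial note, not part of the patch] For reference, a minimal sketch of how
an application might exercise the new GTP item together with the RAW_DECAP and
RAW_ENCAP actions through the generic rte_flow API. The helper name, port id,
queue index, TEID value and the raw header buffers are illustrative
placeholders; the buffers are assumed to hold complete eth/ipv4/udp/gtp outer
header stacks that the driver's interpret_raw_data() can parse, and RAW_DECAP
is placed before RAW_ENCAP as required by interpret_flow_actions().

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Illustrative only: steer GTP-U traffic with a given TEID to queue 0,
 * stripping the original outer headers and pushing a new set.
 */
static struct rte_flow *
setup_gtp_reencap_flow(uint16_t port_id,
		       uint8_t *decap_hdrs, size_t decap_len,	/* headers to strip */
		       uint8_t *encap_hdrs, size_t encap_len)	/* headers to push */
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_error err;

	/* Match eth/ipv4/udp/gtp with a specific (placeholder) TEID */
	struct rte_flow_item_gtp gtp_spec = { .hdr.teid = RTE_BE32(0x1234) };
	struct rte_flow_item_gtp gtp_mask = { .hdr.teid = RTE_BE32(0xffffffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_GTP,
		  .spec = &gtp_spec, .mask = &gtp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	/* RAW_DECAP first, then RAW_ENCAP, then the fate-deciding action */
	struct rte_flow_action_raw_decap decap = {
		.data = decap_hdrs, .size = decap_len,
	};
	struct rte_flow_action_raw_encap encap = {
		.data = encap_hdrs, .size = encap_len,
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}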