From: Viacheslav Ovsiienko
Date: Wed, 22 Sep 2021 21:04:15 +0300
Message-ID: <20210922180418.20663-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 0/3] ethdev: introduce configurable flexible item

1. Introduction and Retrospective

Networks nowadays are evolving fast, network structures are becoming
more and more complicated, and new application areas keep emerging. To
address these challenges, new network protocols are continuously being
developed, considered by technical communities, adopted by industry
and, eventually, implemented in hardware and software. The DPDK
framework follows these trends, and a glance at the RTE Flow API header
shows that multiple new items have been introduced over the years since
the initial release.
The adoption and implementation of a new protocol is not
straightforward and takes time: the protocol passes through
development, consideration, adoption, and implementation phases. The
industry tries to anticipate forthcoming network protocols; for
example, many hardware vendors implement flexible and configurable
network protocol parsers. As DPDK developers, can we anticipate the
near future in the same fashion and introduce similar flexibility in
the RTE Flow API?

Let's check what is already merged in the project: there is the nice
raw item (rte_flow_item_raw). At first glance it looks suitable, and we
can try to implement flow matching on the header of some relatively new
tunnel protocol, say the GENEVE header with variable-length options. On
further consideration, however, we run into the raw item limitations:

- only fixed-size network headers can be represented
- the entire network header pattern must be provided in a fixed format
  (header field offsets are fixed)
- the pattern search is not robust (wrong matches might be triggered)
  and is actually not supported by existing PMDs
- there are no explicitly specified relations with preceding and
  following items
- there is no tunnel hint support

As a result, implementing support for tunnel protocols like the
aforementioned GENEVE, with variable extra protocol options, using the
raw item becomes very complicated and would require multiple flows and
multiple raw items chained in the same flow (and, by the way, no
support for chained raw items was found in the existing drivers). This
RFC introduces a dedicated flex item (rte_flow_item_flex) to handle
matches on existing and new network protocol headers in a unified
fashion.

2. Flex Item Life Cycle

Let's assume there is a requirement to support a new network protocol
with RTE Flow.
What is given by the protocol specification:

- the header format
- the header length (can be variable, depending on options)
- the potential presence of extra options following or included in
  the header
- the relations with preceding protocols; for example, GENEVE follows
  UDP, while eCPRI can follow either UDP or an L2 header
- the relations with following protocols; for example, the next layer
  after a tunnel header can be L2 or L3
- whether the new protocol is a tunnel, so that its header is a
  splitting point between the outer and inner layers

The supposed way to operate with the flex item:

- the application defines the header structures according to the
  protocol specification
- the application calls rte_flow_flex_item_create() with the desired
  configuration according to the protocol specification; this creates
  the flex item object over the specified Ethernet device and prepares
  the PMD and underlying hardware to handle flex items. On the creation
  call, the PMD backing the specified Ethernet device returns an opaque
  handle identifying the object that has been created
- the application uses rte_flow_item_flex with the obtained handle in
  flows; the values/masks to match with the header fields are specified
  in the flex item per flow, as for regular items (except that the
  pattern buffer combines all fields)
- flows with flex items match packets in a regular fashion; the values
  and masks for the new protocol header match are taken from the flex
  items in the flows
- the application destroys the flows with flex items
- the application calls rte_flow_flex_item_release() as part of the
  Ethernet device API, destroying the flex item object in the PMD and
  releasing the engaged hardware resources

3. Flex Item Structure

The flex item structure is intended to be used as part of the flow
pattern, like regular RTE flow items, and provides the mask and value
to match with the fields of the protocol the item was configured for.
struct rte_flow_item_flex {
	void *handle;
	uint32_t length;
	const uint8_t *pattern;
};

The handle is an opaque object maintained on a per-device basis by the
underlying driver. The protocol header fields are considered bit
fields; all offsets and widths are expressed in bits. The pattern is
the buffer containing the bit concatenation of all the fields presented
at item configuration time, in the same order and the same amount. If
byte-boundary alignment is needed, the application can use a dummy-type
field as a gap filler. The length field specifies the pattern buffer
length in bytes and is needed to allow rte_flow_copy() operations. The
approach of multiple pattern pointers and lengths (one per field) was
considered and found clumsy: it is much more suitable for the
application to maintain a single structure with a single pattern
buffer.

4. Flex Item Configuration

The flex item configuration consists of the following parts:

- header field descriptors:
  - next header
  - next protocol
  - samples to match
- input link descriptors
- output link descriptors

The field descriptors tell the driver and hardware what data should be
extracted from the packet and then presented for matching in the flows.
Each field is a bit pattern: it has a width, an offset from the header
beginning, a mode of offset calculation, and offset-related parameters.
The next header field is special: no data are actually taken from the
packet, but its offset is used as a pointer to the next header in the
packet; in other words, the next header offset specifies the size of
the header being parsed by the flex item. There is one more special
field, next protocol: it specifies where the next protocol identifier
is contained, and the packet data sampled from this field are used to
determine the next protocol header type to continue packet parsing. The
next protocol field is like the eth_type field in the L2 (MAC) header,
or the proto field in the IPv4/IPv6 headers.
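As an illustration of the bit-concatenated pattern buffer described in
section 3, the following helper packs a field value at an arbitrary bit
offset, MSB first. It is purely hypothetical and is not part of the
proposed API; it only demonstrates the layout an application would have
to build by hand:

```c
#include <stdint.h>

/* Hypothetical helper, only to illustrate the bit-concatenated pattern
 * layout: write 'width' bits of 'value' into 'buf' starting at bit
 * offset 'bit_off', most significant bit first.
 */
static void
pattern_set_bits(uint8_t *buf, uint32_t bit_off, uint32_t width,
		 uint32_t value)
{
	for (uint32_t i = 0; i < width; i++) {
		uint32_t pos = bit_off + i;
		uint8_t mask = (uint8_t)(1u << (7 - pos % 8));

		if ((value >> (width - 1 - i)) & 1u)
			buf[pos / 8] |= mask;		/* set the bit */
		else
			buf[pos / 8] &= (uint8_t)~mask;	/* clear the bit */
	}
}
```

For example, an 8-bit field with value 0xAB placed at bit offset 4 of a
zeroed two-byte buffer yields the bytes 0x0A, 0xB0: the field spans the
byte boundary with no implicit padding, which is why a dummy field may
be needed when byte alignment matters.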
The sample fields represent the data to be sampled from the packet and
then matched with the established flows. There are several methods to
calculate the field offset at runtime, depending on the configuration
and packet content:

- FIELD_MODE_FIXED - fixed offset. The bit offset from the header
  beginning is permanent and defined by the field_base configuration
  parameter.

- FIELD_MODE_OFFSET - the field bit offset is extracted from another
  header field (the indirect offset field). The resulting field offset
  to match is calculated as:

    field_base + ((*field_offset & offset_mask) << field_shift)

  This mode is useful for sampling extra options that follow a main
  header containing its own length field. It can also be used to
  calculate the offset to the next protocol header; for example, the
  IPv4 header contains a 4-bit field with the IPv4 header length
  expressed in dwords. As another example, this mode allows skipping
  the GENEVE header's variable-length options.

- FIELD_MODE_BITMASK - the field bit offset is extracted from another
  header field (the indirect offset field), which is treated as a
  bitmask containing some number of set bits; the resulting field
  offset to match is calculated as:

    field_base + (bitcount(*field_offset & offset_mask) << field_shift)

  This mode is useful for skipping the GTP header and its extra options
  with specified flags.

- FIELD_MODE_DUMMY - dummy field, optionally used for byte-boundary
  alignment in the pattern. The pattern mask and data are ignored in
  the match. All configuration parameters besides the field size and
  offset are ignored.

The offset mode list can be extended by vendors according to
hardware-supported options.

The input link configuration section tells the driver after which
protocols and under what conditions the flex item can follow. An input
link specifies the preceding header pattern; for example, for GENEVE it
can be a UDP item specifying a match on destination port 6081.
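The two indirect offset calculations above (FIELD_MODE_OFFSET and
FIELD_MODE_BITMASK) can be written out in plain C. This is a sketch
only: the function names and the standalone bitcount are invented for
illustration and are not part of the proposed API.

```c
#include <stdint.h>

/* Portable population count, as used by the FIELD_MODE_BITMASK mode. */
static uint32_t
bitcount32(uint32_t v)
{
	uint32_t n = 0;

	while (v) {
		n += v & 1u;
		v >>= 1;
	}
	return n;
}

/* FIELD_MODE_OFFSET: the field bit offset comes from another header
 * field; the sampled value is masked and shifted to convert units
 * (e.g. dwords to bits).
 */
static uint32_t
offset_mode_offset(uint32_t field_base, uint32_t field_offset_val,
		   uint32_t offset_mask, uint32_t field_shift)
{
	return field_base + ((field_offset_val & offset_mask) << field_shift);
}

/* FIELD_MODE_BITMASK: the indirect field is a bitmask; the offset is
 * derived from the number of bits set in it.
 */
static uint32_t
offset_mode_bitmask(uint32_t field_base, uint32_t field_offset_val,
		    uint32_t offset_mask, uint32_t field_shift)
{
	return field_base +
	       (bitcount32(field_offset_val & offset_mask) << field_shift);
}
```

With the IPv4 example above: the first header byte 0x45 masked with
0x0F gives an IHL of 5 dwords; shifting by 5 (dwords to bits) yields a
bit offset of 160 past field_base.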
The flex item can follow multiple header types, so multiple input links
should be specified. At flow creation time, an item with one of the
input link types should precede the flex item, and the driver selects
the correct flex item settings depending on the actual flow pattern.

The output link configuration section tells the driver how to continue
packet parsing after the flex item protocol. If multiple protocols can
follow the flex item header, the flex item should contain a field with
the next protocol identifier, and parsing continues depending on the
data contained in this field in the actual packet.

The flex item fields can participate in the RSS hash calculation; a
dedicated flag is present in the field description to specify which
fields should be provided for hashing.

5. Flex Item Chaining

If multiple protocols are supposed to be supported with flex items in a
chained fashion - two or more flex items within the same flow, possibly
neighbors in the pattern - the flex items are mutually referencing. In
this case, the item that is created first should be created with an
empty output link list, or with a list including only existing items,
and then the second flex item should be created referencing the first
flex item as an input arc.

Also, the hardware resources used by flex items to handle the packet
can be limited. If multiple flex items are supposed to be used within
the same flow, it is helpful to give the driver a hint that these two
or more flex items are intended for simultaneous use. The fields of the
items should be assigned hint indices, and these indices from two or
more flex items should not overlap (i.e. they should be unique per
field). In this case, the driver tries to engage non-overlapping
hardware resources and provides independent handling of the fields with
unique indices. If the hint index is zero, the driver assigns resources
on its own.

6. Example of New Protocol Handling

Let's suppose we have a requirement to handle a new tunnel protocol
that follows a UDP header with destination port 0xFADE and is followed
by a MAC header. Let the new protocol header format be:

struct new_protocol_header {
	rte_be32_t header_length; /* length in dwords, including options */
	rte_be32_t specific0;     /* some protocol data, no intention    */
	rte_be32_t specific1;     /* to match in flows on these fields   */
	rte_be32_t crucial;       /* data of interest, match is needed   */
	rte_be32_t options[0];    /* optional data, variable length      */
};

The supposed flex item configuration:

struct rte_flow_item_flex_field field0 = {
	.field_mode = FIELD_MODE_DUMMY, /* Affects match pattern only */
	.field_size = 96, /* Skip three dwords from the beginning */
};

struct rte_flow_item_flex_field field1 = {
	.field_mode = FIELD_MODE_FIXED,
	.field_size = 32, /* Field size is one dword */
	.field_base = 96, /* Three dwords from the header beginning */
};

struct rte_flow_item_udp spec0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFADE),
	}
};

struct rte_flow_item_udp mask0 = {
	.hdr = {
		.dst_port = RTE_BE16(0xFFFF),
	}
};

struct rte_flow_item_flex_link link0 = {
	.item = {
		.type = RTE_FLOW_ITEM_TYPE_UDP,
		.spec = &spec0,
		.mask = &mask0,
	},
};

struct rte_flow_item_flex_conf conf = {
	.next_header = {
		.field_mode = FIELD_MODE_OFFSET,
		.field_base = 0,
		.offset_base = 0,
		.offset_mask = 0xFFFFFFFF,
		.offset_shift = 2 /* Expressed in dwords, shift left by 2 */
	},
	.sample = {
		&field0,
		&field1,
	},
	.sample_num = 2,
	.input_link[0] = &link0,
	.input_num = 1
};

Let's suppose we have created the flex item successfully and the PMD
returned the handle 0x123456789A.
We can use the following item pattern to match the crucial field in the
packet with value 0x00112233:

struct new_protocol_header spec_pattern = {
	.crucial = RTE_BE32(0x00112233),
};

struct new_protocol_header mask_pattern = {
	.crucial = RTE_BE32(0xFFFFFFFF),
};

struct rte_flow_item_flex spec_flex = {
	.handle = (void *)0x123456789A,
	.length = sizeof(struct new_protocol_header),
	.pattern = (const uint8_t *)&spec_pattern,
};

struct rte_flow_item_flex mask_flex = {
	.length = sizeof(struct new_protocol_header),
	.pattern = (const uint8_t *)&mask_pattern,
};

struct rte_flow_item item_to_match = {
	.type = RTE_FLOW_ITEM_TYPE_FLEX,
	.spec = &spec_flex,
	.mask = &mask_flex,
};

7. Notes:

- testpmd and mlx5 PMD parts are coming soon
- RFC: http://patches.dpdk.org/project/dpdk/patch/20210806085624.16497-1-viacheslavo@nvidia.com/

Gregory Etelson (2):
  ethdev: support flow elements with variable length
  ethdev: implement RTE flex item API

Viacheslav Ovsiienko (1):
  ethdev: introduce configurable flexible item

 doc/guides/prog_guide/rte_flow.rst     |  24 +++
 doc/guides/rel_notes/release_21_11.rst |   7 +
 lib/ethdev/rte_ethdev.h                |   1 +
 lib/ethdev/rte_flow.c                  | 141 +++++++++++++--
 lib/ethdev/rte_flow.h                  | 228 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  13 ++
 lib/ethdev/version.map                 |   5 +
 7 files changed, 406 insertions(+), 13 deletions(-)

-- 
2.18.1