From mboxrd@z Thu Jan  1 00:00:00 1970
From: Boris Pismenny <borisp@mellanox.com>
To: "dev@dpdk.org"
Cc: "Doherty, Declan", Shahaf Shuler
Date: Tue, 2 Jan 2018 12:50:50 +0200
References: <3560e76a-c99b-4dc3-9678-d7975acf67c9@mellanox.com>
In-Reply-To: <3560e76a-c99b-4dc3-9678-d7975acf67c9@mellanox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [RFC] tunnel endpoint hw acceleration enablement
List-Id: DPDK patches and discussions

Hi Declan,

On 12/22/2017 12:21 AM, Doherty, Declan wrote:
> This RFC contains a proposal to add a new tunnel endpoint API to DPDK that
> when used in conjunction with rte_flow enables the configuration of inline
> data path encapsulation and
> decapsulation of tunnel endpoint network overlays on accelerated IO devices.
>
> The proposed new API would provide for the creation, destruction, and
> monitoring of a tunnel endpoint in supporting hw, as well as capabilities APIs
> to allow the acceleration features to be discovered by applications.
>
> /** Tunnel Endpoint context, opaque structure */
> struct rte_tep;
>
> enum rte_tep_type {
>        RTE_TEP_TYPE_VXLAN = 1, /**< VXLAN Protocol */
>        RTE_TEP_TYPE_NVGRE,     /**< NVGRE Protocol */
>        ...
> };
>
> /** Tunnel Endpoint Attributes */
> struct rte_tep_attr {
>        enum rte_tep_type type;
>
>        /* other endpoint attributes here */
> }
>
> /**
>  * Create a tunnel end-point context as specified by the flow attribute and
>  * pattern
>  *
>  * @param port_id Port identifier of Ethernet device.
>  * @param attr Flow rule attributes.
>  * @param pattern Pattern specification by list of rte_flow_items.
>  * @return
>  *  - On success returns pointer to TEP context
>  *  - On failure returns NULL
>  */
> struct rte_tep *rte_tep_create(uint16_t port_id,
>                struct rte_tep_attr *attr, struct rte_flow_item pattern[])
>
> /**
>  * Destroy an existing tunnel end-point context. All the end-points context
>  * will be destroyed, so all active flows using tep should be freed before
>  * destroying context.
>  * @param port_id Port identifier of Ethernet device.
>  * @param tep Tunnel endpoint context
>  * @return
>  *  - On success returns 0
>  *  - On failure returns 1
>  */
> int rte_tep_destroy(uint16_t port_id, struct rte_tep *tep)
>
> /**
>  * Get tunnel endpoint statistics
>  *
>  * @param port_id Port identifier of Ethernet device.
>  * @param tep Tunnel endpoint context
>  * @param stats Tunnel endpoint statistics
>  *
>  * @return
>  *  - On success returns 0
>  *  - On failure returns 1
>  */
> int
> rte_tep_stats_get(uint16_t port_id, struct rte_tep *tep,
>                struct rte_tep_stats *stats)
>
> /**
>  * Get ports tunnel endpoint capabilities
>  *
>  * @param port_id Port identifier of Ethernet device.
>  * @param capabilities Tunnel endpoint capabilities
>  *
>  * @return
>  *  - On success returns 0
>  *  - On failure returns 1
>  */
> int
> rte_tep_capabilities_get(uint16_t port_id,
>                struct rte_tep_capabilities *capabilities)
>
>
> To direct traffic flows to hw terminated tunnel endpoints the rte_flow API is
> enhanced to add a new flow item type. This contains a pointer to the
> TEP context as well as the overlay flow id to which the traffic flow is
> associated.
>
> struct rte_flow_item_tep {
>        struct rte_tep *tep;
>        uint32_t flow_id;
> }
>
> Also 2 new generic action types are added: encapsulation and decapsulation.
>
> RTE_FLOW_ACTION_TYPE_ENCAP
> RTE_FLOW_ACTION_TYPE_DECAP
>
> struct rte_flow_action_encap {
>        struct rte_flow_item *item;
> }
>
> struct rte_flow_action_decap {
>        struct rte_flow_item *item;
> }
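
Just to make sure I read the discovery/statistics part right, this is roughly
the usage I would expect (a sketch only: the header name rte_tep.h and the
helper names are my own, and nothing below looks inside rte_tep_capabilities
or rte_tep_stats since their members are not defined by the RFC yet):

#include <stdint.h>
#include <rte_log.h>
/* #include <rte_tep.h>  -- hypothetical header for the proposed API */

/* Hypothetical helper: returns 0 if the port advertises TEP acceleration. */
static int
tep_probe(uint16_t port_id)
{
        struct rte_tep_capabilities caps;

        /* Only the return code is checked; the RFC does not define the
         * members of rte_tep_capabilities. */
        return rte_tep_capabilities_get(port_id, &caps) == 0 ? 0 : -1;
}

/* Hypothetical helper: poll per-endpoint counters for an existing TEP. */
static void
tep_poll_stats(uint16_t port_id, struct rte_tep *tep)
{
        struct rte_tep_stats stats;

        if (rte_tep_stats_get(port_id, tep, &stats) != 0)
                RTE_LOG(WARNING, USER1, "no TEP stats on port %u\n", port_id);
}
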
>
> The following section outlines the intended usage of the new APIs and then
> how they are combined with the existing rte_flow APIs.
>
> Tunnel endpoints are created on logical ports which support the capability
> using rte_tep_create() using a combination of TEP attributes and
> rte_flow_items. In the example below a new IPv4 VxLAN endpoint is being
> defined. The attrs parameter sets the TEP type, and could be used for other
> possible attributes.
>
> struct rte_tep_attr attrs = { .type = RTE_TEP_TYPE_VXLAN };
>
> The values for the headers which make up the tunnel endpoint are then
> defined using the spec parameter in the rte_flow items (IPv4, UDP and
> VxLAN in this case)
>
> struct rte_flow_item_ipv4 ipv4_item = {
>        .hdr = { .src_addr = saddr, .dst_addr = daddr }
> };
>
> struct rte_flow_item_udp udp_item = {
>        .hdr = { .src_port = sport, .dst_port = dport }
> };
>
> struct rte_flow_item_vxlan vxlan_item = { .flags = vxlan_flags };
>
> struct rte_flow_item pattern[] = {
>        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
>        { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_item },
>        { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_item },
>        { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> The tunnel endpoint can then be created on the port. Whether or not any hw
> configuration is required at this point would be hw dependent, but if not
> the context for the TEP is available for use in programming flow, so the
> application is not forced to redefine the TEP parameters on each flow
> addition.
>
> struct rte_tep *tep = rte_tep_create(port_id, &attrs, pattern);
>
> Once the tep context is created flows can then be directed to that endpoint
> for processing. The following sections will outline how the author envisages
> flow programming will work and also how TEP acceleration can be combined with
> other accelerations.
>
>
> Ingress TEP decapsulation, mark and forward to queue:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The flow definitions for TEP decapsulation actions should specify the full
> outer packet to be matched at a minimum. The outer packet definition should
> match the tunnel definition in the tep context and the tep flow id. This
> example describes matching on the outer, marking the packet with the
> VXLAN VNI and directing to a specified queue of the port.
>
> Source Packet
>
>         Decapsulate Outer Hdr
>        /                     \                              decap outer crc
>       /                       \                                /         \
> +-----+------+-----+-------+-----+------+-----+---------+-----+-----------+
> | ETH | IPv4 | UDP | VxLAN | ETH | IPv4 | TCP | PAYLOAD | CRC | OUTER CRC |
> +-----+------+-----+-------+-----+------+-----+---------+-----+-----------+
>
> /* Flow Attributes/Items Definitions */
>
> struct rte_flow_attr attr = { .ingress = 1 };
>
> struct rte_flow_item_eth eth_item = {
>        .src = s_addr, .dst = d_addr, .type = ether_type
> };
> struct rte_flow_item_tep tep_item = { .tep = tep, .id = vni };
>
> struct rte_flow_item pattern[] = {
>        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
>        { .type = RTE_FLOW_ITEM_TYPE_TEP, .spec = &tep_item },
>        { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> /* Flow Actions Definitions */
>
> struct rte_flow_action_decap decap_eth = {
>        .type = RTE_FLOW_ITEM_TYPE_ETH,
>        .item = { .src = s_addr, .dst = d_addr, .type = ether_type }
> };
>
> struct rte_flow_action_decap decap_tep = {
>        .type = RTE_FLOW_ITEM_TYPE_TEP,
>        .spec = &tep_item
> };
>
> struct rte_flow_action_queue queue_action = { .index = qid };
>
> struct rte_flow_action_port mark_action = { .index = vni };
>
> struct rte_flow_action actions[] = {
>        { .type = RTE_FLOW_ACTION_TYPE_DECAP, .conf = &decap_eth },
>        { .type = RTE_FLOW_ACTION_TYPE_DECAP, .conf = &decap_tep },
>        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_action },
>        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_action },
>        { .type = RTE_FLOW_ACTION_TYPE_END }
> };

I guess the Ethernet header is kept separate so that it would be possible to
update it separately? But I don't know of any way to update a specific
rte_flow pattern. Maybe it would be best to combine it with the rest of the
TEP and add an update TEP command?
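
Something along these lines is what I have in mind, purely as a strawman (the
function name and the reuse of rte_flow_item for the updated header values are
my assumptions, not part of the RFC):

/**
 * Strawman only: update the headers of an existing tunnel endpoint, e.g. to
 * refresh the outer Ethernet addresses after an ARP/ND change, without
 * destroying and re-creating the flows that reference it.
 *
 * @param port_id Port identifier of Ethernet device.
 * @param tep     Tunnel endpoint context to update.
 * @param pattern New header values, same layout as passed to rte_tep_create().
 * @return 0 on success, non-zero otherwise.
 */
int rte_tep_update(uint16_t port_id, struct rte_tep *tep,
                   struct rte_flow_item pattern[]);
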
>
> /** VERY IMPORTANT NOTE **/
> One of the core concepts of this proposal is that actions which modify the
> packet are defined in the order which they are to be processed. So first
> decap outer ethernet header, then the outer TEP headers.
> I think this is not only logical from a usability point of view, it should
> also simplify the logic required in PMDs to parse the desired actions.

This makes a lot of sense when dealing with encap/decap. Maybe it would be
best to take a new bit from the reserved field in rte_flow_attr to express
this. Something like this:

struct rte_flow_attr {
        uint32_t group;       /**< Priority group. */
        uint32_t priority;    /**< Priority level within group. */
        uint32_t ingress:1;   /**< Rule applies to ingress traffic. */
        uint32_t egress:1;    /**< Rule applies to egress traffic. */
        uint32_t inorder:1;   /**< Actions are applied in order. */
        uint32_t reserved:29; /**< Reserved, must be zero. */
};

>
> struct rte_flow *flow =
>        rte_flow_create(port_id, &attr, pattern, actions, &err);
>
> The processed packets are delivered to the specified queue with mbuf metadata
> denoting the marked flow id and with mbuf ol_flags PKT_RX_TEP_OFFLOAD set.
>
> +-----+------+-----+---------+-----+
> | ETH | IPv4 | TCP | PAYLOAD | CRC |
> +-----+------+-----+---------+-----+
>
>
> Ingress TEP decapsulation switch to port:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> This is intended to represent how a TEP decapsulation could be configured
> in a switching offload case, it makes an assumption that there is a logical
> port representation for all ports on the hw switch in the DPDK application,
> but similar functionality could be achieved by specifying something like a
> VF ID of the device.
>
> Like the previous scenario the flow definitions for TEP decapsulation actions
> should specify the full outer packet to be matched at a minimum but also
> define the elements of the inner match to match against including masks if
> required.

Why is the inner specification necessary? What if I'd like to decapsulate all
VXLAN traffic of some specification?
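
For instance, I would expect to be able to write something like the sketch
below, reusing the outer items from your example further down but without any
inner items, so that everything carried on the tunnel is decapsulated and
forwarded. This is hypothetical: it assumes the TEP item may be the last item
in the pattern and that a mask (tep_wildcard_mask here) can wildcard the flow
id.

/* Sketch: decapsulate any flow arriving on this tunnel endpoint, without
 * constraining the inner headers. */
struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth_item },
        { .type = RTE_FLOW_ITEM_TYPE_TEP, .spec = &outer_tep_item,
          .mask = &tep_wildcard_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END }
};
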
>
> struct rte_flow_attr attr = { .ingress = 1 };
>
> struct rte_flow_item pattern[] = {
>        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth_item },
>        { .type = RTE_FLOW_ITEM_TYPE_TEP, .spec = &outer_tep_item,
>          .mask = &tep_mask },
>        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &inner_eth_item,
>          .mask = &eth_mask },
>        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &inner_ipv4_item,
>          .mask = &ipv4_mask },
>        { .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &inner_tcp_item,
>          .mask = &tcp_mask },
>        { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> /* Flow Actions Definitions */
>
> struct rte_flow_action_decap decap_eth = {
>        .type = RTE_FLOW_ITEM_TYPE_ETH,
>        .item = { .src = s_addr, .dst = d_addr, .type = ether_type }
> };
>
> struct rte_flow_action_decap decap_tep = {
>        .type = RTE_FLOW_ITEM_TYPE_TEP,
>        .item = &outer_tep_item
> };
>
> struct rte_flow_action_port port_action = { .index = port_id };
>
> struct rte_flow_action actions[] = {
>        { .type = RTE_FLOW_ACTION_TYPE_DECAP, .conf = &decap_eth },
>        { .type = RTE_FLOW_ACTION_TYPE_DECAP, .conf = &decap_tep },
>        { .type = RTE_FLOW_ACTION_TYPE_PORT, .conf = &port_action },
>        { .type = RTE_FLOW_ACTION_TYPE_END }
> };
>
> struct rte_flow *flow =
>        rte_flow_create(port_id, &attr, pattern, actions, &err);
>
> This action will forward the decapsulated packets to another port of the
> switch fabric, but no information on the tunnel or the fact that the packet
> was decapsulated will be passed with it, thereby enabling segregation of the
> infrastructure and
>
>
> Egress TEP encapsulation:
> ~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Encapsulation TEP actions require the flow definitions for the source packet
> and then the actions to do on that, this example shows an ipv4/tcp packet
> action.
>
> Source Packet
>
> +-----+------+-----+---------+-----+
> | ETH | IPv4 | TCP | PAYLOAD | CRC |
> +-----+------+-----+---------+-----+
>
> struct rte_flow_attr attr = { .egress = 1 };
>
> struct rte_flow_item_eth eth_item = {
>        .src = s_addr, .dst = d_addr, .type = ether_type
> };
> struct rte_flow_item_ipv4 ipv4_item = {
>        .hdr = { .src_addr = src_addr, .dst_addr = dst_addr }
> };
> struct rte_flow_item_tcp tcp_item = {
>        .hdr = { .src_port = src_port, .dst_port = dst_port }
> };
>
> struct rte_flow_item pattern[] = {
>        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_item },
>        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ipv4_item },
>        { .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_item },
>        { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> /* Flow Actions Definitions */
>
> struct rte_flow_action_encap encap_eth = {
>        .type = RTE_FLOW_ITEM_TYPE_ETH,
>        .item = { .src = s_addr, .dst = d_addr, .type = ether_type }
> };
>
> struct rte_flow_action_encap encap_tep = {
>        .type = RTE_FLOW_ITEM_TYPE_TEP,
>        .item = { .tep = tep, .id = vni }
> };
> struct rte_flow_action_mark port_action = { .index = port_id };

This is the source port_id, where previously it was the destination port_id,
right?
>
> struct rte_flow_action actions[] = {
>        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_tep },
>        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_eth },
>        { .type = RTE_FLOW_ACTION_TYPE_PORT, .conf = &port_action },
>        { .type = RTE_FLOW_ACTION_TYPE_END }
> }
> struct rte_flow *flow =
>        rte_flow_create(port_id, &attr, pattern, actions, &err);
>
>
>        encapsulating Outer Hdr
>       /                       \                                   outer crc
>      /                         \                                 /         \
> +-----+------+-----+-------+-----+------+-----+---------+-----+-----------+
> | ETH | IPv4 | UDP | VxLAN | ETH | IPv4 | TCP | PAYLOAD | CRC | OUTER CRC |
> +-----+------+-----+-------+-----+------+-----+---------+-----+-----------+
>
>
>
> Chaining multiple modification actions eg IPsec and TEP
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> For example the definition for full hw acceleration for an IPsec
> ESP/Transport SA encapsulated in a vxlan tunnel would look something like:
>
> struct rte_flow_action actions[] = {
>        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_tep },
>        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &sec_session },
>        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_eth },
>        { .type = RTE_FLOW_ACTION_TYPE_END }
> }

Assuming the actions are ordered, the order here suggests that the packet
looks like:

[ETH | IP | UDP | VXLAN | ETH | IP | ESP | payload | ESP TRAILER | CRC]

But the packet below has the ESP header as the outer header. Also, shouldn't
the encap_eth action come before the encap_tep action?

>
> 1. Source Packet
> +-----+------+-----+---------+-----+
> | ETH | IPv4 | TCP | PAYLOAD | CRC |
> +-----+------+-----+---------+-----+
>
> 2. First Action - Tunnel Endpoint Encapsulation
>
> +------+-----+-------+-----+------+-----+---------+-----+
> | IPv4 | UDP | VxLAN | ETH | IPv4 | TCP | PAYLOAD | CRC |
> +------+-----+-------+-----+------+-----+---------+-----+
>
> 3. Second Action - IPsec ESP/Transport Security Processing
>
> +------+-----+-------------------+-------------+
> | IPv4 | ESP | ENCRYPTED PAYLOAD | ESP TRAILER |
> +------+-----+-------------------+-------------+
>
> 4. Third Action - Outer Ethernet Encapsulation
>
> +-----+------+-----+-------------------+-------------+-----------+
> | ETH | IPv4 | ESP | ENCRYPTED PAYLOAD | ESP TRAILER | OUTER CRC |
> +-----+------+-----+-------------------+-------------+-----------+
>
> This example demonstrates the importance of making the interoperation of
> actions ordered, as in the above example a security action can be defined on
> both the inner and outer packet by simply placing another security action at
> the beginning of the action list.
>
> It also demonstrates the rationale for not collapsing the Ethernet into
> the TEP definition as when you have multiple encapsulating actions, all
> could potentially be the place where the Ethernet header needs to be
> defined.

With rte_security full protocol offload as presented here we still need some
way to provide and update the Ethernet header. Maybe there should be two
encap_eth actions in this case, one for the outer and another for the inner?
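
To make that concrete, the action chain I am imagining would look roughly like
the following. This is hypothetical: encap_eth_inner and encap_eth_outer are
simply two instances of the proposed rte_flow_action_encap carrying the inner
and outer Ethernet headers respectively, and the ordering follows the in-order
semantics described above.

/* Sketch only: the inner Ethernet header (inside the VXLAN tunnel) and the
 * outer Ethernet header (on the wire) are supplied by separate encap actions,
 * so each can be provided, and later updated, independently. */
struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_eth_inner },
        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_tep },
        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = &sec_session },
        { .type = RTE_FLOW_ACTION_TYPE_ENCAP, .conf = &encap_eth_outer },
        { .type = RTE_FLOW_ACTION_TYPE_END }
};
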