From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Viacheslav Ovsiienko
CC: Yongseok Koh
Subject: Re: [dpdk-dev] [PATCH v6 1/2] ethdev: extend flow metadata
Date: Thu, 31 Oct 2019 12:19:25 +0300
Message-ID: <609368e2-d817-9c6a-6a80-712132a8b6c5@solarflare.com>
In-Reply-To: <1572455548-23420-2-git-send-email-viacheslavo@mellanox.com>
References: <1572377502-13620-1-git-send-email-viacheslavo@mellanox.com>
 <1572455548-23420-1-git-send-email-viacheslavo@mellanox.com>
 <1572455548-23420-2-git-send-email-viacheslavo@mellanox.com>
List-Id: DPDK patches and discussions

On 10/30/19 8:12 PM, Viacheslav Ovsiienko wrote:
> Currently, metadata can be set on the egress path via the mbuf
> tx_metadata field with the PKT_TX_METADATA flag, and the
> RTE_FLOW_ITEM_TYPE_META item matches that metadata.
>
> This patch extends the usability of the metadata feature.
>
> 1) RTE_FLOW_ACTION_TYPE_SET_META
>
> When multiple tables are supported, Tx metadata can also be set by one
> rule and matched by another. The new action allows metadata to be set
> as a result of a flow match.
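
Just to illustrate the intended usage (this snippet is not part of the
patch; the exact layout of the action configuration, assumed here to be
a data/mask pair, and the byte order of the value are my assumptions):

#include <rte_flow.h>

/* Sketch: an egress rule that stamps a metadata value on every packet
 * so that another rule can match it with RTE_FLOW_ITEM_TYPE_META.
 * Error handling and byte-order conversion are omitted for brevity. */
static struct rte_flow *
set_meta_rule(uint16_t port_id)
{
    struct rte_flow_attr attr = { .egress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_set_meta meta_conf = {
        .data = 0x1234,     /* value to set (illustrative) */
        .mask = 0xffffffff, /* which bits of the value to update */
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &meta_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}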
>
> 2) Metadata on ingress
>
> There is also a need to support metadata on ingress. Metadata can be
> set by the SET_META action and matched by the META item, just as on Tx.
> The final value set by the action is delivered to the application via
> the metadata dynamic field of the mbuf, which can be accessed with the
> RTE_FLOW_DYNF_METADATA() macro or with the rte_flow_dynf_metadata_set()
> and rte_flow_dynf_metadata_get() helper routines. The
> PKT_RX_DYNF_METADATA flag is set along with the data.
>
> The mbuf dynamic field must be registered by calling
> rte_flow_dynf_metadata_register() prior to using the SET_META action.
>
> The availability of the dynamic mbuf metadata field can be checked
> with the rte_flow_dynf_metadata_avail() routine.
>
> If an application is going to use the metadata feature, it registers
> the metadata dynamic field; the PMD then checks the field's
> availability and handles it in the datapath.
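
For reference, the receive side could then look roughly like the sketch
below (again not from the patch; the burst size, the placement of the
registration call and its return-value convention are assumptions made
for the example):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Sketch: register the dynamic metadata field (normally done once at
 * initialization, before any SET_META rule is created) and read the
 * value the PMD delivers with received packets. */
static void
rx_read_metadata(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[32];
    uint16_t nb, i;

    if (rte_flow_dynf_metadata_register() < 0) {
        printf("cannot register the metadata dynamic field\n");
        return;
    }
    /* Both the PMD and the application may check the availability. */
    if (!rte_flow_dynf_metadata_avail())
        return;

    nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
    for (i = 0; i < nb; i++) {
        if (pkts[i]->ol_flags & PKT_RX_DYNF_METADATA)
            printf("metadata 0x%" PRIx32 "\n",
                   rte_flow_dynf_metadata_get(pkts[i]));
        rte_pktmbuf_free(pkts[i]);
    }
}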
>
> For loopback/hairpin packets, metadata set on Rx/Tx may or may not be
> propagated to the other path, depending on hardware capability.
>
> MARK and METADATA look similar and may operate in a similar way, but
> they do not interact.
>
> Initially, two metadata-related actions were proposed:
>
> - RTE_FLOW_ACTION_TYPE_FLAG
> - RTE_FLOW_ACTION_TYPE_MARK
>
> The FLAG action sets a special flag in the packet metadata, the MARK
> action stores a specified value in the metadata storage, and, on packet
> reception, the PMD puts the flag and value into the mbuf, so the
> application can see that the packet was treated inside the flow engine
> according to the appropriate rte_flow rule(s). MARK and FLAG are a kind
> of gateway to transfer per-packet information from the flow engine to
> the application via the receive datapath. There is also the item of
> type RTE_FLOW_ITEM_TYPE_MARK, which extends the flow match pattern with
> the capability to match the metadata values set by MARK/FLAG actions in
> other flows.
>
> From the datapath point of view, MARK and FLAG are related to the
> receiving side only. It would be useful to have the same gateway on the
> transmitting side, and for that the item of type
> RTE_FLOW_ITEM_TYPE_META was proposed. The application fills the field
> in the mbuf, and this value is transferred to some field in the packet
> metadata inside the flow engine. It did not matter whether these
> metadata fields were shared, because the MARK and META items belonged
> to different domains (receiving and transmitting) and could be
> vendor-specific.
>
> So far, so good: DPDK provides entities to control metadata inside the
> flow engine and gateways to exchange these values on a per-packet basis
> via the datapaths.
>
> As we can see, the MARK and META mechanisms are not symmetric; there is
> no action which would allow us to set the META value on the
> transmitting path. So, the action of type:
>
> - RTE_FLOW_ACTION_TYPE_SET_META
>
> was proposed.
>
> Next, applications raised new requirements for packet metadata. Flow
> engines are getting more complex: internal switches are introduced, and
> multiple ports may be supported within the same flow engine namespace.
> From the DPDK point of view, this means packets might be sent on one
> eth_dev port and received on another, while the packet path inside the
> flow engine belongs entirely to the same hardware device. The simplest
> example is SR-IOV with a PF, VFs and their representors. And there is a
> brilliant opportunity to provide an out-of-band channel to transfer
> extra data from one port to another, besides the packet data itself,
> and applications would like to use this opportunity.
>
> The application is supposed to use trials (with rte_flow_validate) to
> detect which metadata features (FLAG, MARK, META) are actually
> supported by the PMD and the underlying hardware. This may depend on
> PMD configuration, system software, hardware settings, etc., and should
> be detected at run time.
>
> Signed-off-by: Yongseok Koh
> Signed-off-by: Viacheslav Ovsiienko

It is good enough as an experimental feature to try and see how it goes, so

Acked-by: Andrew Rybchenko
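
As a footnote, the run-time probing mentioned above might look roughly
like the sketch below in an application. The particular pattern/action
combination and the field values are illustrative only; the point is
that rte_flow_validate() is queried instead of creating the rule:

#include <stdint.h>
#include <rte_flow.h>

/* Sketch: ask the PMD whether an ingress rule matching META and marking
 * the packet would be accepted, without actually creating it. A zero
 * return means this feature combination is supported. */
static int
probe_meta_match(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_meta meta_spec = { .data = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_mark mark = { .id = 42 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}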