From: Thomas Monjalon
To: Ivan Malov, Andrew Rybchenko
Cc: dev@dpdk.org, Andy Moreton, orika@nvidia.com, ferruh.yigit@intel.com, olivier.matz@6wind.com
Date: Fri, 01 Oct 2021 14:10:57 +0200
Message-ID: <1769631.gUYDzmcafU@thomas>
In-Reply-To: <759a4302-8777-68fd-a0bb-e65c4328cd21@oktetlabs.ru>
References: <20210902142359.28138-1-ivan.malov@oktetlabs.ru> <2522405.PTVv94qZMn@thomas> <759a4302-8777-68fd-a0bb-e65c4328cd21@oktetlabs.ru>
Subject: Re: [dpdk-dev] [PATCH v3 0/5] A means to negotiate delivery of Rx meta data
01/10/2021 12:15, Andrew Rybchenko:
> On 10/1/21 12:48 PM, Thomas Monjalon wrote:
> > 01/10/2021 10:55, Ivan Malov:
> >> On 01/10/2021 11:11, Thomas Monjalon wrote:
> >>> 01/10/2021 08:47, Andrew Rybchenko:
> >>>> On 9/30/21 10:30 PM, Ivan Malov wrote:
> >>>>> On 30/09/2021 19:18, Thomas Monjalon wrote:
> >>>>>> 23/09/2021 13:20, Ivan Malov:
> >>>>>>> Patch [1/5] of this series adds a generic API to let applications
> >>>>>>> negotiate delivery of Rx meta data during the initialisation period.
> >>>
> >>> What is metadata?
> >>> Do you mean RTE_FLOW_ITEM_TYPE_META and RTE_FLOW_ITEM_TYPE_MARK?
> >>> The word "metadata" could cover any field in the mbuf struct, so it is vague.
> >>
> >> Metadata here is *any* additional information provided by the NIC for
> >> each received packet: for example, Rx flag, Rx mark, RSS hash, packet
> >> classification info, you name it. I'd like to stress that the
> >> suggested API comes with flags, each of which is crystal clear about
> >> what concrete kind of metadata it covers, e.g. Rx mark.
> >
> > I missed the flags.
> > You mean these 3 flags?
>
> Yes
>
> > +/** The ethdev sees flagged packets if there are flows with action FLAG. */
> > +#define RTE_ETH_RX_META_USER_FLAG (UINT64_C(1) << 0)
> > +
> > +/** The ethdev sees mark IDs in packets if there are flows with action MARK. */
> > +#define RTE_ETH_RX_META_USER_MARK (UINT64_C(1) << 1)
> > +
> > +/** The ethdev detects missed packets if there are "tunnel_set" flows in use. */
> > +#define RTE_ETH_RX_META_TUNNEL_ID (UINT64_C(1) << 2)
> >
> > It is not crystal clear, because it does not reference the API,
> > like RTE_FLOW_ACTION_TYPE_MARK.
>
> Thanks, that is easy to fix. Please note that there is no action
> for the tunnel ID case.

I don't understand the tunnel ID meta.
Is it an existing offload? An API?

> > And it covers a limited set of metadata.
>
> Yes, the kinds that are not covered by offloads, packet classification,
> etc. Anything else?
>
> > Do you intend to extend it to all mbuf metadata?
>
> No. That should be discussed case by case, separately.

Ah, that makes the intent clearer.
Why not plan to do something truly generic?

> >>>>>>> This way, an application knows right from the start which parts
> >>>>>>> of Rx meta data won't be delivered. Hence, there is no need to try
> >>>>>>> inserting flows requesting such data and handle the failures.
> >>>>>>
> >>>>>> Sorry, I don't understand the problem you want to solve.
> >>>>>> And sorry for not noticing earlier.
> >>>>>
> >>>>> No worries. *Some* PMDs do not enable delivery of, say, Rx mark with the
> >>>>> packets by default (for performance reasons). If the application tries
> >>>>> to insert a flow with action MARK, the PMD may not be able to enable
> >>>>> delivery of Rx mark without restarting the Rx sub-system, and
> >>>>> that is fraught with traffic disruption and similar bad consequences. In
> >>>>> order to address this, we need to let the application express its interest
> >>>>> in receiving mark with packets as early as possible. This way, the PMD
> >>>>> can enable Rx mark delivery in advance. And, as an additional benefit,
> >>>>> the application can learn *from the very beginning* whether it will be
> >>>>> possible to use the feature or not. If this API tells the application
> >>>>> that no mark delivery will be enabled, then the application can simply
> >>>>> skip the many unnecessary attempts to insert knowingly unsupported flows
> >>>>> at runtime.
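
For illustration, here is a minimal sketch of how an application might consume the three flags quoted above before configuring the port. The entry point rte_eth_rx_meta_negotiate() and its in/out bitmask semantics are assumptions based on the description of patch [1/5] in this thread, not a confirmed API:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Flags as quoted from patch [1/5] above (not yet part of rte_ethdev.h). */
#ifndef RTE_ETH_RX_META_USER_FLAG
#define RTE_ETH_RX_META_USER_FLAG (UINT64_C(1) << 0)
#define RTE_ETH_RX_META_USER_MARK (UINT64_C(1) << 1)
#define RTE_ETH_RX_META_TUNNEL_ID (UINT64_C(1) << 2)
#endif

/*
 * Hypothetical negotiation entry point; the name and the in/out bitmask
 * behaviour are assumed from the patch description, not a confirmed API.
 */
int rte_eth_rx_meta_negotiate(uint16_t port_id, uint64_t *features);

static int
negotiate_rx_meta(uint16_t port_id)
{
	/* Request every kind of Rx metadata the application may need,
	 * in a single call, before rte_eth_dev_configure()/start(). */
	uint64_t features = RTE_ETH_RX_META_USER_FLAG |
			    RTE_ETH_RX_META_USER_MARK |
			    RTE_ETH_RX_META_TUNNEL_ID;
	int ret;

	ret = rte_eth_rx_meta_negotiate(port_id, &features);
	if (ret != 0 && ret != -ENOTSUP)
		return ret;

	/* On return, 'features' is narrowed to what the PMD agreed to
	 * deliver, so the application knows up front which flows are
	 * worth inserting at runtime. */
	if ((features & RTE_ETH_RX_META_USER_MARK) == 0)
		printf("port %u: mark IDs will not be delivered\n", port_id);

	return 0;
}

The point of the bitmask is that everything is requested at once, before Rx start, and the value handed back tells the application immediately which metadata kinds the PMD will actually deliver.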
> >>>
> >>> I'm puzzled, because we could have the same reasoning for any offload.
> >>
> >> We're not discussing *offloads*. An offload is when the NIC *computes
> >> something* and *delivers* it. We are discussing precisely *delivery*.
> >
> > OK, but still, there is a lot more mbuf metadata delivered.
>
> Yes, and some of it is not yet controlled early enough, which is
> what we do here.
>
> >
> >>> I don't understand why we are focusing on mark only.
> >>
> >> We are not focusing on mark on purpose. It's just how our discussion
> >> goes. I chose mark (could've chosen flag or anything else) just to show
> >> you an example.
> >>
> >>> I would prefer we find a generic solution using the rte_flow API.
>
> Can we make rte_flow_validate() work before port start?
>
> >>> If validating a fake rule doesn't make sense,
> >>> why not have a new function accepting a single action as parameter?
> >>
> >> A noble idea, but if we feed the entire flow rule to the driver for
> >> validation, then the driver must not look specifically for actions FLAG
> >> or MARK in it (to enable or disable metadata delivery). This way, the
> >> driver is obliged to also validate match criteria, attributes, etc. And,
> >> if something is unsupported (say, some specific item), the driver will
> >> have to reject the rule as a whole, thus leaving the application to join
> >> the dots itself.
> >>
> >> Say, you ask the driver to validate the following rule:
> >> pattern blah-blah-1 / blah-blah-2 / end action flag / end
> >> intending to check support for FLAG delivery. Suppose the driver
> >> doesn't support pattern item "blah-blah-1". It will throw an error right
> >> after seeing this unsupported item and won't even go further to see the
> >> action FLAG. How can the application know whether its request for FLAG
> >> was heard or not?
> >
> > No, I'm proposing a new function to validate the action alone,
> > without any match etc.
> > Example:
> > rte_flow_action_request(RTE_FLOW_ACTION_TYPE_MARK)
>
> What about tunnel ID?
>
> Also, negotiation in terms of a bitmask natively allows the application
> to provide everything required at once, and it simplifies the
> implementation in the driver: no dependency on the order of checks, etc.
> It also allows renegotiation without any extra API functions.

You mean there is a single function call with all bits set?

> >> And I'd not bind delivery of metadata to the flow API. Consider the
> >> following example. We have a DPDK application sitting at the *host* and
> >> we have a *guest* with its *own* DPDK instance. The guest DPDK has asked
> >> the NIC (by virtue of the flow API) to mark all outgoing packets. These
> >> packets reach the *host* DPDK. Say, the host application just wants to
> >> see the marked packets from the guest. Its own (the host's) use of the
> >> flow API is a don't-care here. The host doesn't want to mark packets
> >> itself, it wants to see packets marked by the guest.
> >
> > It does not make sense to me. We are talking about a DPDK API.
> > My concern is to avoid redefining new flags
> > while we already have rte_flow actions.
>
> See above.
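
For comparison, a sketch of the per-action alternative suggested above. The helper rte_flow_action_request() is hypothetical: neither the name nor the signature exists in the rte_flow API today, and the port_id/error parameters are guesses added only to make the example self-contained:

#include <stdint.h>
#include <rte_flow.h>

/*
 * Hypothetical per-action request/validation helper, as proposed above;
 * not an existing rte_flow function, signature assumed for illustration.
 */
int rte_flow_action_request(uint16_t port_id, enum rte_flow_action_type type,
			    struct rte_flow_error *error);

/*
 * With a per-action call, each kind of Rx metadata is requested one by one
 * (and, as noted above, tunnel ID has no corresponding action), whereas the
 * bitmask approach covers all of them in a single negotiation call.
 */
static int
request_rx_meta_per_action(uint16_t port_id)
{
	struct rte_flow_error err;
	int ret;

	ret = rte_flow_action_request(port_id, RTE_FLOW_ACTION_TYPE_FLAG, &err);
	if (ret != 0)
		return ret;

	return rte_flow_action_request(port_id, RTE_FLOW_ACTION_TYPE_MARK, &err);
}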