From: Ajit Khaparde
Date: Mon, 8 Mar 2021 15:05:37 -0800
To: Ori Kam
Cc: NBU-Contact-Thomas Monjalon, Slava Ovsiienko, ferruh.yigit@intel.com,
 Andrew Rybchenko, dev@dpdk.org, jerinj@marvell.com
Subject: Re: [dpdk-dev] [RFC] ethdev: add sanity packet checks
List-Id: DPDK patches and discussions
References: <1614541699-99345-1-git-send-email-orika@nvidia.com>
 <1769565.OWqOAu9aEJ@thomas> <7146547.nIQmEXas8S@thomas>

On Sun, Mar 7, 2021 at 10:46 AM Ori Kam wrote:
>
> Hi
>
> > -----Original Message-----
> > From: Thomas Monjalon
> > Sent: Thursday, March 4, 2021 12:46 PM
> > Subject: Re: [dpdk-dev] [RFC] ethdev: add sanity packet checks
> >
> > 04/03/2021 11:00, Ori Kam:
> > > From: Thomas Monjalon
> > > > 28/02/2021 20:48, Ori Kam:
> > > > > Currently, a DPDK application can offload the checksum check
> > > > > and report it in the mbuf.
> > > > >
> > > > > However, this approach doesn't work if the traffic
> > > > > is offloaded and should not arrive at the application.
> > > > >
> > > > > This commit introduces rte flow item that enables
> > > >
> > > > s/rte flow/rte_flow/
> > > >
> > > Sure
> > >
> > > > > matching on the checksum of the L3 and L4 layers,
> > > > > in addition to other checks that can determine if
> > > > > the packet is valid. Some of those tests can be
> > > > > packet len, data len, unsupported flags, and so on.
> > > > >
> > > > > The full check is HW dependent.
> > > >
> > > > What is the "full check"?
> > > > How much is it HW dependent?
> > > >
> > >
> > > This also relates to your other comments.
> > > Each HW may run a different set of checks on the packet;
> > > for example, one PMD can check just the TCP flags while
> > > a different PMD will also check the options.
> >
> > I'm not sure how an application can rely on
> > such a vague definition.
> >
> Even now we are marking a packet in the mbuf with unknown
> in case of some error.
> Would a better wording be "The HW detected errors in the packet"?
> In any case, if the app needs to know what the error is, that is its
> responsibility; this item is just verification for the fast path.
> If you have a better suggestion, I will be very happy to hear it.
>
> > > > > + * RTE_FLOW_ITEM_TYPE_SANITY_CHECKS
> > > > > + *
> > > > > + * Enable matching on packet validity based on HW checks for the L3 and L4
> > > > > + * layers.
> > > > > + */
> > > > > +struct rte_flow_item_sanity_checks {
> > > > > +	uint32_t level;
> > > > > +	/**< Packet encapsulation level the item should apply to.
> > > > > +	 * @see rte_flow_action_rss
> > > > > +	 */
> > > > > +	RTE_STD_C11
> > > > > +	union {
> > > > > +		struct {
> > > >
> > > > Why is there no L2 check?
> > > >
> > > Our HW doesn't support it.
> > > If other HW supports it, it should be added.
> >
> > It would be an ABI breakage. Can we add it day one?
> >
> Will add a reserved field; since this is a bit field there shouldn't be
> any ABI break.
>
> > > > > +			uint32_t l3_ok:1;
> > > > > +			/**< L3 layer is valid after passing all HW checking. */
> > > > > +			uint32_t l4_ok:1;
> > > > > +			/**< L4 layer is valid after passing all HW checking. */
> > > >
> > > > l3_ok and l4_ok look vague.
> > > > What do they cover exactly?
> > > >
> > > It depends on the HW in question.
> > > In our case, for L3 it checks
> > > the header len and the version.
> > > For L4 it checks the len.
> >
> > If we don't know exactly what is checked,
> > how can an application rely on it?
> > Is it a best-effort check? What is the use case?
> >
> From the application's point of view that packet is invalid;
> it is the app's responsibility to understand why.

And that it can determine based on the available fields in ol_flags, right?
If HW can indicate that the packet integrity is in question, a PMD should
be able to set the bits in ol_flags. After that the application should
decide what to drop and what to pass.

What is missing is the ability for the application to tell the HW/PMD to
drop any packet which fails the packet integrity checks.

I believe HW generally drops packets when the Ethernet CRC check fails.
But L3 and L4 errors are left to the application to deal with.
If an application wants to save some CPU cycles, it could ask the hardware
to drop those packets as well. So one bit to enable/disable this for all
packets should be good.

In case we still want to pursue this per flow, how about
RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS instead of
RTE_FLOW_ITEM_TYPE_SANITY_CHECKS?

> You can think about it as: in any case there might be
> different counters for different errors, or different actions.
> For example, the app can decide that an incorrect IPv4 version
> should result in a packet drop, while an incorrect len
> may pass.
>
> Maybe we can list all possible error results, but I'm worried
> that we may not cover all of them. On some HW there is just one
> bit that marks if the packet is valid or not.
>
> > > > > > +			uint32_t l3_ok_csum:1;
> > > > > > +			/**< L3 layer checksum is valid. */
> > > > > > +			uint32_t l4_ok_csum:1;
> > > > > > +			/**< L4 layer checksum is valid. */
> >
> > What worries me is that the checksum is separate but other checks
> > are in a common bucket.
> > I think we should have one field per precise check
> > with a way to report what is checked.
>
> Please see the above comment. Current HW doesn't support
> such fine-grained detection, so adding bits will force the user
> to select all of them. In addition, since the HW has some internal
> checks, it is possible that the reject reason will be unrelated to
> L3; for example, some HW may not support more than two VLANs,
> so in the case of 3 VLANs it may report bad L3.
>
> Best,
> Ori