From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ferruh Yigit
To: Gregory Etelson, orika@nvidia.com
Cc: ajit.khaparde@broadcom.com, andrew.rybchenko@oktetlabs.ru, dev@dpdk.org,
 jerinj@marvell.com, jerinjacobk@gmail.com, olivier.matz@6wind.com,
 thomas@monjalon.net, viacheslavo@nvidia.com, matan@nvidia.com,
 rasland@nvidia.com
References: <20210414125700.12940-1-getelson@nvidia.com>
 <20210414125700.12940-2-getelson@nvidia.com>
Message-ID: <881ca2ff-2e2a-53d0-2668-e745694ff448@intel.com>
Date: Wed, 14 Apr 2021 14:31:40 +0100
Subject: Re: [dpdk-dev] [PATCH v4 1/2] ethdev: add packet integrity checks

On 4/14/2021 2:27 PM, Ferruh Yigit wrote:
> On 4/14/2021 1:56 PM, Gregory Etelson wrote:
>> From: Ori Kam
>>
>> Currently, a DPDK application can offload the checksum check
>> and report it in the mbuf.
>>
>> However, as more and more applications are offloading some or all
>> logic and actions to the HW, there is a need to check the packet
>> integrity so the right decision can be taken.
>>
>> The application logic can be positive, meaning if the packet is
>> valid jump / do actions, or negative, meaning if the packet is not
>> valid jump to SW / do actions (like drop), and add a default flow
>> (match all in low priority) that will direct the miss packet
>> to the miss path.
>>
>> Since rte_flow currently works in the positive way, the assumption is
>> that the positive way will be the common way in this case also.
>>
>> When thinking about the best API to implement such a feature,
>> we need to consider the following (in no specific order):
>> 1. API breakage.
>> 2. Simplicity.
>> 3. Performance.
>> 4. HW capabilities.
>> 5. rte_flow limitations.
>> 6. Flexibility.
>>
>> First option: add integrity flags to each of the items.
>> For example, add checksum_ok to the ipv4 item.
>>
>> Pros:
>> 1. No new rte_flow item.
>> 2. Simple, in the sense that on each item the app can see
>>    what checks are available.
>>
>> Cons:
>> 1. API breakage.
>> 2. Increased number of flows, since the app can't add a global rule and
>>    must have a dedicated flow for each of the flow combinations; for example,
>>    matching on ICMP traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>    result in 5 flows.
>>
>> Second option: a dedicated item.
>>
>> Pros:
>> 1. No API breakage, and there will be none for some time due to having
>>    extra space (by using bits).
>> 2. Just one flow to support ICMP or UDP/TCP traffic with IPv4 / IPv6.
>> 3. Simplicity: the application can just look at one place to see all
>>    possible checks.
>> 4. Allows future support for more tests.
>>
>> Cons:
>> 1. A new item that holds a number of fields from different items.
>>
>> For starters, the following bits are suggested:
>> 1. packet_ok - means that all HW checks depending on packet layer have
>>    passed. This may mean that in some HW such a flow should be split into
>>    a number of flows, or fail.
>> 2. l2_ok - all checks for layer 2 have passed.
>> 3. l3_ok - all checks for layer 3 have passed. If the packet doesn't have
>>    an l3 layer this check should fail.
>> 4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
>>    have an l4 layer this check should fail.
>> 5. l2_crc_ok - the layer 2 CRC is O.K.
>> 6. ipv4_csum_ok - the IPv4 checksum is O.K. It is possible that the
>>    IPv4 checksum will be O.K. but l3_ok will be 0; it is not
>>    possible that the checksum will be 0 and l3_ok will be 1.
>> 7. l4_csum_ok - the layer 4 checksum is O.K.
>> 8. l3_len_ok - check that the reported layer 3 length is smaller than the
>>    frame length.
>>
>> Example of usage:
>> 1. Check packets from all possible layers for integrity:
>>    flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>
>> 2. Check only packets with layer 4 (UDP / TCP):
>>    flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
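
As a rough C illustration of usage example 2 above: the sketch below builds a
rule that only matches packets whose layer 3 and layer 4 checks passed and
jumps them to another group. The RTE_FLOW_ITEM_TYPE_INTEGRITY and
struct rte_flow_item_integrity names and field layout are assumptions made for
the example; the authoritative definitions are in the rte_flow.h hunk of this
patch, which is not quoted in full here.

#include <rte_flow.h>

/* Hypothetical sketch: match only packets whose L3 and L4 HW checks passed
 * (mirrors usage example 2 above). Item type and struct names are assumed;
 * see the rte_flow.h changes of this patch for the real definitions. */
static struct rte_flow *
create_l3_l4_ok_rule(uint16_t port_id, struct rte_flow_error *error)
{
        struct rte_flow_attr attr = { .ingress = 1, .group = 0 };
        struct rte_flow_item_integrity spec = { .l3_ok = 1, .l4_ok = 1 };
        struct rte_flow_item_integrity mask = { .l3_ok = 1, .l4_ok = 1 };
        struct rte_flow_item pattern[] = {
                {
                        .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
                        .spec = &spec,
                        .mask = &mask,
                },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Positive logic: valid packets jump to group 1, everything else is
         * handled by a low-priority catch-all rule (the "miss path"). */
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}

Only the bits set in the mask take part in the match, so the same item can
express a single packet_ok check or any combination of the finer-grained bits.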
>>
>> Signed-off-by: Ori Kam
>> ---
>>  doc/guides/prog_guide/rte_flow.rst     | 19 ++++++++++
>>  doc/guides/rel_notes/release_21_05.rst |  5 +++
>>  lib/librte_ethdev/rte_flow.h           | 48 ++++++++++++++++++++++++++
>>  3 files changed, 72 insertions(+)
>>
>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>> index e1b93ecedf..4b8723b84c 100644
>> --- a/doc/guides/prog_guide/rte_flow.rst
>> +++ b/doc/guides/prog_guide/rte_flow.rst
>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>   - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>   - Default ``mask`` matches nothing, for all eCPRI messages.
>> +Item: ``PACKET_INTEGRITY_CHECKS``
>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> +
>> +Matches packet integrity.
>> +Some devices require pre-enabling for this item before using it.
>> +
>
> "pre-enabling" may not be clear enough, what about updating it slightly:
>
> "Some devices require enabling integrity checks in HW before using this flow
> item."
>

Indeed, even with the above it is not clear who should do the enabling, PMD or
application, let me try again:

"For some devices the application needs to enable integrity checks in HW before
using this flow item."

> For the record, the intention here is to highlight that if the requested
> integrity check is not enabled in HW, creating the flow rule will fail.
> The application may need to enable the integrity check in HW first.
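
On the "enable in HW first" point, a hedged sketch of how an application could
probe for this before committing, reusing the attr/pattern/actions built in the
earlier sketch: rte_flow_validate() is the existing entry point for checking
whether a rule can be created, and the error handling shown is only
illustrative.

#include <stdio.h>
#include <rte_flow.h>

static struct rte_flow *
create_or_fallback(uint16_t port_id, const struct rte_flow_attr *attr,
                   const struct rte_flow_item pattern[],
                   const struct rte_flow_action actions[])
{
        struct rte_flow_error error = { .type = RTE_FLOW_ERROR_TYPE_NONE };

        /* If the requested integrity check is not enabled/supported in HW,
         * validation (and creation) of the rule is expected to fail. */
        if (rte_flow_validate(port_id, attr, pattern, actions, &error) != 0) {
                printf("integrity rule rejected: %s\n",
                       error.message ? error.message : "(no details)");
                return NULL; /* application falls back to a SW check path */
        }
        return rte_flow_create(port_id, attr, pattern, actions, &error);
}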