From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
To: Ori Kam <orika@nvidia.com>,
	"ajit.khaparde@broadcom.com" <ajit.khaparde@broadcom.com>,
	"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
	NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	 "olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
Date: Fri, 9 Apr 2021 11:08:41 +0300
Message-ID: <957123f7-dd1f-fdfd-a51b-d6a22be353bb@oktetlabs.ru>
In-Reply-To: <DM6PR12MB49877B4E35C64169BAEFA02ED6749@DM6PR12MB4987.namprd12.prod.outlook.com>

On 4/8/21 2:39 PM, Ori Kam wrote:
> Hi Andrew,
> 
> Thanks for your comments.
> 
> PSB,
> 
> Best,
> Ori
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Thursday, April 8, 2021 11:05 AM
>> Subject: Re: [PATCH] ethdev: add packet integrity checks
>>
>> On 4/5/21 9:04 PM, Ori Kam wrote:
>>> Currently, a DPDK application can offload the checksum check
>>> and have the result reported in the mbuf.
>>>
>>> However, as more and more applications offload some or all of their
>>> logic and actions to the HW, there is a need to check the packet
>>> integrity so the right decision can be taken.
>>>
>>> The application logic can be positive, meaning that if the packet is
>>> valid it jumps / does actions, or negative, meaning that if the packet
>>> is not valid it jumps to SW / does actions (like drop), with a default
>>> flow (match all, low priority) added to direct the miss packets
>>> to the miss path.
>>>
>>> Since rte_flow currently works in the positive way, the assumption is
>>> that the positive way will be the common way in this case as well.
>>>
>>> When thinking about the best API to implement such a feature,
>>> we need to consider the following (in no specific order):
>>> 1. API breakage.
>>
>> First of all I disagree that "API breakage" is put as a top
>> priority. Design is a top priority, since it is long term.
>> API breakage is just a short-term inconvenience. Of course,
>> others may disagree, but that's my point of view.
>>
> I agree with you, and like I said the order of the list is not
> according to priorities.
> I truly believe that what I'm suggesting is the best design.
> 
> 
>>> 2. Simplicity.
>>> 3. Performance.
>>> 4. HW capabilities.
>>> 5. rte_flow limitation.
>>> 6. Flexibility.
>>>
>>> First option: Add integrity flags to each of the items.
>>> For example add checksum_ok to ipv4 item.
>>>
>>> Pros:
>>> 1. No new rte_flow item.
>>> 2. Simple, in the sense that for each item the app can see
>>> which checks are available.
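(To make sure we mean the same thing by the first option: as I read it, it
would be a hypothetical change along the lines of the sketch below. This is
not part of this patch or of the current API; the field name is just the
example given in the commit message.)

  /* Hypothetical only - what "add checksum_ok to ipv4 item" could look like. */
  struct rte_flow_item_ipv4 {
      struct rte_ipv4_hdr hdr;  /**< IPv4 header definition (existing field). */
      uint32_t checksum_ok:1;   /**< Hypothetical: HW verified the IPv4 header checksum. */
      uint32_t reserved:31;     /**< Hypothetical: room for more integrity flags. */
  };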
>>
>> 3. Natively supports various tunnels without any extra
>>    changes in a shared item for all layers.
>>
> Also, in the currently suggested approach we have the level member,
> so tunnels are supported by default. If someone also wants to check the
> tunnel, they just need to add this item again with the right level
> (just like with other items).

Thanks, missed it. Is it OK to have just one item with
level 1 or 2?

What happens if two items with level 0 and level 1 are
specified, but the packet has no encapsulation?
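
I.e. would a pattern like the one below be valid (just a C sketch using the
item proposed in this patch; mask construction is omitted for brevity)?

  /* One integrity item at level 0 (PMD default) and one at level 1
   * (outermost), both requiring a valid L3.
   */
  struct rte_flow_item_packet_integrity_checks lvl0 = { .level = 0, .l3_ok = 1 };
  struct rte_flow_item_packet_integrity_checks lvl1 = { .level = 1, .l3_ok = 1 };

  struct rte_flow_item pattern[] = {
      { .type = RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS, .spec = &lvl0 },
      { .type = RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS, .spec = &lvl1 },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };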

>>>
>>> Cons:
>>> 1. API breakage.
>>> 2. Increases the number of flows, since the app can't add a global rule
>>>    and must have a dedicated flow for each of the flow combinations; for
>>>    example, matching on ICMP traffic or UDP/TCP traffic with IPv4 / IPv6
>>>    will result in 5 flows.
>>
>> Could you expand on it? Shouldn't HW-offloaded flows with good
>> checksums go into dedicated queues whereas bad packets go
>> via the default path (i.e. no extra rules)?
>>
> I'm not sure what you mean. In a lot of cases the
> application will use this to detect valid packets and then
> forward only the valid packets down the pipeline (check valid, jump
> --> in the next group decap ....).
> In other cases the app may choose to drop the bad packets, or count
> and then drop them, or maybe sample them to check that this is not
> part of an attack.
> 
> This is what is great about this feature: we give the app
> the ability to offload the sanity checks, and that enables it
> to offload the traffic itself.

Please, when you say "increase number of flows... in 5 flows",
just try to express it in flow rules for both cases, just for my
understanding. Since you calculated the number of flows, you should
have a real example in mind.
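
For reference, here is how I read a "good packets to a dedicated queue" rule
with the dedicated item (just a C sketch; port and queue numbers are made up,
error handling omitted):

  struct rte_flow_attr attr = { .ingress = 1 };
  struct rte_flow_item_packet_integrity_checks spec = { .level = 0 };
  struct rte_flow_item_packet_integrity_checks mask = { .value = 0 };
  struct rte_flow_action_queue queue = { .index = 1 };
  struct rte_flow_error error;

  /* Match packets whose L3 and L4 passed all HW integrity checks. */
  spec.l3_ok = 1;
  spec.l4_ok = 1;
  mask.l3_ok = 1;
  mask.l4_ok = 1;

  struct rte_flow_item pattern[] = {
      { .type = RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
        .spec = &spec, .mask = &mask },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  struct rte_flow_action actions[] = {
      { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
      { .type = RTE_FLOW_ACTION_TYPE_END },
  };

  rte_flow_create(0, &attr, pattern, actions, &error);

If I understand correctly, with the first option the integrity bits would
live inside the protocol items, so covering the same traffic would need a
separate rule per L3/L4 combination - is that where the 5 flows come from?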

>>>
>>> Second option: dedicated item
>>>
>>> Pros:
>>> 1. No API breakage, and there will be none for some time due to having
>>>    extra space (by using bits).
>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>>    IPv6.
>>
>> It depends on how bad (or good) packets are handled.
>>
> Not sure what you mean.

Again, I hope we understand each other when we talk in terms
of real examples and flow rules.

>>> 3. Simplicity: the application can just look at one place to see all
>>>    possible checks.
>>
>> It is a drawback from my point of view, since the IPv4 checksum
>> check is outside of the IPv4 match item. I.e. when analyzing IPv4
>> you have to look at 2 different flow items.
>>
> Are you talking from the application viewpoint, PMD or HW?
> From the application side, yes, it is true that it needs to add one more
> item to the list (depending on its flows, since it can have just
> one flow that checks the whole packet, like I said, and move the good
> ones to a different group, and in that group it will match the
> ipv4 item).
> For example:
> ... pattern integrity = valid action jump group 3
> Group 3 pattern .... ipv4 ... actions .....
> Group 3 pattern .... ipv6 .... actions ...
> 
> In any case, in the worst case it is just adding one more item
> to the flow.
> 
> From the PMD/HW side, extra items don't mean extra actions in HW;
> they can be combined, just as they would be if the
> condition was in the item itself.
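
So, to check that I read the example right: in terms of the C API it would be
roughly the two rules below (just a sketch; I assume "integrity = valid" maps
to packet_ok, group numbers are as in your example, actions are made up):

  struct rte_flow_error error;

  /* Group 0: packets that pass all HW integrity checks jump to group 3. */
  struct rte_flow_attr attr0 = { .group = 0, .ingress = 1 };
  struct rte_flow_item_packet_integrity_checks ok_spec = { .packet_ok = 1 };
  struct rte_flow_item_packet_integrity_checks ok_mask = { .packet_ok = 1 };
  struct rte_flow_item pat0[] = {
      { .type = RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
        .spec = &ok_spec, .mask = &ok_mask },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  struct rte_flow_action_jump jump = { .group = 3 };
  struct rte_flow_action act0[] = {
      { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
      { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  rte_flow_create(0, &attr0, pat0, act0, &error);

  /* Group 3: usual per-protocol matching on the known-good packets. */
  struct rte_flow_attr attr3 = { .group = 3, .ingress = 1 };
  struct rte_flow_item pat3[] = {
      { .type = RTE_FLOW_ITEM_TYPE_ETH },
      { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };
  struct rte_flow_action_queue queue = { .index = 1 };
  struct rte_flow_action act3[] = {
      { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
      { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  rte_flow_create(0, &attr3, pat3, act3, &error);

Plus, presumably, a low-priority catch-all rule (or just the default path)
for the packets that fail the checks.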
> 
>>> 4. Allow future support for more tests.
>>
>> It is the same for both solutions, since the per-item solution
>> can keep reserved bits which may be used in the future.
>>
> Yes I agree, 
> 
>>>
>>> Cons:
>>> 1. New item that holds a number of fields from different items.
>>
>> 2. Not that nice for tunnels.
> 
> Please look at the above (not direct) response: since we have the level
> member, tunnels are handled very nicely.
> 
>>
>>>
>>> For a start, the following bits are suggested:
>>> 1. packet_ok - means that all HW checks depending on the packet layers
>>>    have passed. This may mean that in some HW such a flow should be split
>>>    into a number of flows, or fail.
>>> 2. l2_ok - all checks for layer 2 have passed.
>>> 3. l3_ok - all checks for layer 3 have passed. If the packet doesn't have
>>>    an l3 layer this check should fail.
>>> 4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
>>>    have an l4 layer this check should fail.
>>> 5. l2_crc_ok - the layer 2 CRC is O.K. It is possible that the CRC will
>>>    be O.K. but l3_ok will be 0; it is not possible that l2_crc_ok will
>>>    be 0 and l3_ok will be 1.
>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>> 8. l3_len_ok - the reported layer 3 len is smaller than the packet len.
>>>
>>> Example of usage:
>>> 1. Check packets from all possible layers for integrity.
>>>    flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>
>>> 2. Check only packets with layer 4 (UDP / TCP):
>>>    flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>>
>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>> ---
>>>  doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
>>>  lib/librte_ethdev/rte_flow.h       | 46 ++++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 65 insertions(+)
>>>
>>> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
>>> index aec2ba1..58b116e 100644
>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>>  - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>>  - Default ``mask`` matches nothing, for all eCPRI messages.
>>>
>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> +
>>> +Matches packet integrity.
>>> +
>>> +- ``level``: the encapsulation level that should be checked. level 0 means the
>>> +  default PMD mode (Can be inner most / outermost). value of 1 means outermost
>>> +  and higher value means inner header. See also RSS level.
>>> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
>>> +  layer of the packet.
>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>> +- ``l4_ok``: all layer 4 HW integrity checks passed.
>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>> +
>>>  Actions
>>>  ~~~~~~~
>>>
>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>> index 6cc5713..f6888a1 100644
>>> --- a/lib/librte_ethdev/rte_flow.h
>>> +++ b/lib/librte_ethdev/rte_flow.h
>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>>  	 * See struct rte_flow_item_geneve_opt
>>>  	 */
>>>  	RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>> +
>>> +	/**
>>> +	 * [META]
>>> +	 *
>>> +	 * Matches on packet integrity.
>>> +	 *
>>> +	 * See struct rte_flow_item_packet_integrity_checks.
>>> +	 */
>>> +	RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
>>>  };
>>>
>>>  /**
>>> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
>>>  };
>>>  #endif
>>>
>>> +struct rte_flow_item_packet_integrity_checks {
>>> +	uint32_t level;
>>> +	/**< Packet encapsulation level the item should apply to.
>>> +	 * @see rte_flow_action_rss
>>> +	 */
>>> +RTE_STD_C11
>>> +	union {
>>> +		struct {
>>> +			uint64_t packet_ok:1;
>>> +			/**< The packet is valid after passing all HW checks. */
>>> +			uint64_t l2_ok:1;
>>> +			/**< L2 layer is valid after passing all HW checks. */
>>> +			uint64_t l3_ok:1;
>>> +			/**< L3 layer is valid after passing all HW checks. */
>>> +			uint64_t l4_ok:1;
>>> +			/**< L4 layer is valid after passing all HW checks. */
>>> +			uint64_t l2_crc_ok:1;
>>> +			/**< L2 layer checksum is valid. */
>>> +			uint64_t ipv4_csum_ok:1;
>>> +			/**< L3 layer checksum is valid. */
>>> +			uint64_t l4_csum_ok:1;
>>> +			/**< L4 layer checksum is valid. */
>>> +			uint64_t l3_len_ok:1;
>>> +			/**< The l3 len is smaller than the packet len. */
>>> +			uint64_t reserved:56;
>>> +		};
>>> +		uint64_t  value;
>>> +	};
>>> +};
>>> +
>>> +#ifndef __cplusplus
>>> +static const struct rte_flow_item_packet_integrity_checks
>>> +	rte_flow_item_packet_integrity_checks_mask = {
>>> +		.value = 0,
>>> +};
>>> +#endif
>>> +
>>>  /**
>>>   * Matching pattern item definition.
>>>   *
>>>
> 

