DPDK patches and discussions
From: "Lu, Nannan" <nannan.lu@intel.com>
To: "Zhao1, Wei" <wei.zhao1@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Zhang, Qi Z" <qi.z.zhang@intel.com>, "Peng, Yuan" <yuan.peng@intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/3] net/ice/base: check the number of recipe when in chain
Date: Fri, 10 Apr 2020 02:05:54 +0000	[thread overview]
Message-ID: <9d06fb7728784414ae4995d4a7beac4e@intel.com> (raw)
In-Reply-To: <20200410004157.3032-2-wei.zhao1@intel.com>

Tested-by: Lu, Nannan <nannan.lu@intel.com>

-----Original Message-----
From: Zhao1, Wei 
Sent: Friday, April 10, 2020 8:42 AM
To: dev@dpdk.org
Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Nannan <nannan.lu@intel.com>; Peng, Yuan <yuan.peng@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
Subject: [PATCH 1/3] net/ice/base: check the number of recipe when in chain

When we add a long switch rule, we need to check the final number of recipes it requires; if that number is larger than ICE_MAX_CHAIN_RECIPE, we should refuse the rule.
For example:

"flow create 0 ingress pattern eth / ipv6 src is CDCD:910A:2222:5498:8475:1111:3900:1536
dst is CDCD:910A:2222:5498:8475:1111:3900:2022
tc is 3 / udp dst is 45 / end actions queue index 2 / end"

This rule will consume 6 recipes. If it is not refused, it will cause an out-of-bounds write of lkup_indx and mask in the following code.

LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry,
		l_entry) {
	last_chain_entry->fv_idx[i] = entry->chain_idx;
	buf[recps].content.lkup_indx[i] = entry->chain_idx;
	buf[recps].content.mask[i++] = CPU_TO_LE16(0xFFFF);
	..........
}
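The overflow above can be illustrated in isolation. The sketch below is a minimal, hypothetical stand-in (the real ice_aqc_recipe_content and ice_sw_recipe are larger, and the return code in the driver is ICE_ERR_MAX_LIMIT); it assumes the base-code chain limit of 5 recipes. Without the guard, a rule needing 6 recipes would write one slot past the end of lkup_indx and mask, exactly as in the loop quoted above:

```c
#include <stdint.h>

/* Assumed value of the base-code limit; a simplified stand-in for the
 * driver's definition in ice_switch.h. */
#define ICE_MAX_CHAIN_RECIPE 5

/* Hypothetical, reduced version of the recipe content buffer: one
 * lookup-index slot and one mask slot per chained recipe. */
struct fake_recipe_content {
	uint8_t  lkup_indx[ICE_MAX_CHAIN_RECIPE];
	uint16_t mask[ICE_MAX_CHAIN_RECIPE];
};

/* Mirrors the shape of the patched code path: refuse the rule (return -1,
 * standing in for ICE_ERR_MAX_LIMIT) when the group count exceeds the
 * chain limit, otherwise fill one slot per recipe group as the quoted
 * LIST_FOR_EACH_ENTRY loop does. */
static int add_chained_recipe(struct fake_recipe_content *c, int n_grp_count)
{
	int i;

	if (n_grp_count > ICE_MAX_CHAIN_RECIPE)
		return -1; /* without this check, the loop below overruns */

	for (i = 0; i < n_grp_count; i++) {
		c->lkup_indx[i] = (uint8_t)i;
		c->mask[i] = 0xFFFF;
	}
	return 0;
}
```

With the check in place, the 6-recipe rule from the example is rejected before the fill loop runs, while a 5-recipe rule still populates all slots.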

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b5aa5abd9..c17219274 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -5352,6 +5352,9 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		rm->n_grp_count++;
 	}
 
+	if (rm->n_grp_count > ICE_MAX_CHAIN_RECIPE)
+		return ICE_ERR_MAX_LIMIT;
+
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 							    ICE_MAX_NUM_RECIPES,
 							    sizeof(*tmp));
--
2.19.1


Thread overview: 9+ messages
2020-04-10  0:41 [dpdk-dev] [PATCH 0/3] update ice switch base code Wei Zhao
2020-04-10  0:41 ` [dpdk-dev] [PATCH 1/3] net/ice/base: check the number of recipe when in chain Wei Zhao
2020-04-10  2:05   ` Lu, Nannan [this message]
2020-04-10  0:41 ` [dpdk-dev] [PATCH 2/3] net/ice/base: add mask check when find switch recipe Wei Zhao
2020-04-14  3:02   ` Lu, Nannan
2020-04-10  0:41 ` [dpdk-dev] [PATCH 3/3] net/ice/base: force switch to use different recipe for Wei Zhao
2020-04-10  2:40   ` Peng, Yuan
2020-04-10  1:05 ` [dpdk-dev] [PATCH 0/3] update ice switch base code Zhang, Qi Z
2020-04-15  7:52   ` Ye Xiaolong
