DPDK patches and discussions
From: Wenjun Wu <wenjun1.wu@intel.com>
To: dev@dpdk.org, qi.z.zhang@intel.com
Cc: Wenjun Wu <wenjun1.wu@intel.com>
Subject: [dpdk-dev] [PATCH 20.11 4/7] net/ice: fix error set of queue number
Date: Fri, 10 Sep 2021 16:08:18 +0800	[thread overview]
Message-ID: <20210910080821.18718-5-wenjun1.wu@intel.com> (raw)
In-Reply-To: <20210910080821.18718-1-wenjun1.wu@intel.com>

This patch is not intended for the LTS upstream; it is provided for users to cherry-pick.

The queue number actually applied should be the maximum integer power
of 2 less than or equal to min(vsi->nb_qps, ICE_MAX_Q_PER_TC), so we
need to find the most significant set bit. However, the function
rte_bsf32 returns the least significant set bit. This patch replaces
rte_bsf32 with rte_fls_u32 and adds the necessary boundary check for
a queue number of zero.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5a1e775718..ce98477427 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -750,7 +750,7 @@ ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
 				struct ice_aqc_vsi_props *info,
 				uint8_t enabled_tcmap)
 {
-	uint16_t bsf, qp_idx;
+	uint16_t fls, qp_idx;
 
 	/* default tc 0 now. Multi-TC supporting need to be done later.
 	 * Configure TC and queue mapping parameters, for enabled TC,
@@ -762,15 +762,15 @@ ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi,
 	}
 
 	vsi->nb_qps = RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC);
-	bsf = rte_bsf32(vsi->nb_qps);
+	fls = (vsi->nb_qps == 0) ? 0 : rte_fls_u32(vsi->nb_qps) - 1;
 	/* Adjust the queue number to actual queues that can be applied */
-	vsi->nb_qps = 0x1 << bsf;
+	vsi->nb_qps = (vsi->nb_qps == 0) ? 0 : 0x1 << fls;
 
 	qp_idx = 0;
 	/* Set tc and queue mapping with VSI */
 	info->tc_mapping[0] = rte_cpu_to_le_16((qp_idx <<
 						ICE_AQ_VSI_TC_Q_OFFSET_S) |
-					       (bsf << ICE_AQ_VSI_TC_Q_NUM_S));
+					       (fls << ICE_AQ_VSI_TC_Q_NUM_S));
 
 	/* Associate queue number with VSI */
 	info->mapping_flags |= rte_cpu_to_le_16(ICE_AQ_VSI_Q_MAP_CONTIG);
-- 
2.25.1



Thread overview: 8+ messages
2021-09-10  8:08 [dpdk-dev] [PATCH 20.11 0/7] backport feature support to DPDK 20.11 Wenjun Wu
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 1/7] net/ice: add priority check for flow filters Wenjun Wu
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 2/7] net/ice: refine flow priority usage Wenjun Wu
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 3/7] net/ice: support 256 queues Wenjun Wu
2021-09-10  8:08 ` Wenjun Wu [this message]
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 5/7] net/ice: support 6-tuple RSS Wenjun Wu
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 6/7] net/ice: add L4 support for QinQ switch filter Wenjun Wu
2021-09-10  8:08 ` [dpdk-dev] [PATCH 20.11 7/7] net/ice/base: support L4 " Wenjun Wu
