From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id F3364A0A02
	for ; Mon, 17 May 2021 18:13:06 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id ED352410E0;
	Mon, 17 May 2021 18:13:06 +0200 (CEST)
Received: from youngberry.canonical.com (youngberry.canonical.com
 [91.189.89.112]) by mails.dpdk.org (Postfix) with ESMTP id 23A3640041
 for ; Mon, 17 May 2021 18:13:06 +0200 (CEST)
Received: from 2.general.paelzer.uk.vpn ([10.172.196.173]
 helo=Keschdeichel.fritz.box) by youngberry.canonical.com with esmtpsa
 (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.93)
 (envelope-from ) id 1lifrd-0007qc-LC; Mon, 17 May 2021 16:13:06 +0000
From: Christian Ehrhardt
To: Alvin Zhang
Cc: Lingli Chen, dpdk stable
Date: Mon, 17 May 2021 18:08:10 +0200
Message-Id: <20210517161039.3132619-61-christian.ehrhardt@canonical.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210517161039.3132619-1-christian.ehrhardt@canonical.com>
References: <20210517161039.3132619-1-christian.ehrhardt@canonical.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'net/i40e: fix input set field mask' has been
 queued to stable release 19.11.9
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to stable release 19.11.9

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 05/19/21. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/cpaelzer/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/cpaelzer/dpdk-stable-queue/commit/ce1c16ab2255d945f6f493539b7d3e71128d0e76

Thanks.

Christian Ehrhardt

---
>From ce1c16ab2255d945f6f493539b7d3e71128d0e76 Mon Sep 17 00:00:00 2001
From: Alvin Zhang
Date: Mon, 1 Mar 2021 15:06:07 +0800
Subject: [PATCH] net/i40e: fix input set field mask

[ upstream commit 2a2fd19d46e598a6816be8f81aa62ec372fd495d ]

The absolute field offsets of the IPv4 or IPv6 header are related to the
hardware configuration. The X710 and X722 have different hardware
configurations, and users can even modify the hardware configuration.
Therefore, the default values cannot be used when calculating the mask
offset.
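As background for the change below: the old *_MASK constants baked a fixed
destination offset into their upper 16 bits, e.g. 0x0009FF00UL reads as word
offset 9 plus field mask 0xFF00 under the (offset << 16) | mask layout that
I40E_GLQF_PIT_BUILD() uses. A minimal sketch of that layout, with the
hypothetical helper name build_inset_mask_reg() standing in for the macro:

  #include <stdint.h>

  /* Mask register value: PIT destination offset in the upper 16 bits,
   * 16-bit field mask in the lower half. The offset is no longer a
   * hard-coded constant; it is read from the GLQF_PIT registers at
   * run time. */
  static inline uint32_t
  build_inset_mask_reg(uint32_t dest_off, uint32_t field_mask)
  {
          return (dest_off << 16) | field_mask;
  }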
The following flows can be created on the X722 NIC, but the packets will
not enter queue 3:

  flow create 0 ingress pattern eth / ipv4 proto is 255 / end
    actions queue index 3 / end
  pkt = Ether()/IP(ttl=63, proto=255)/Raw('X'*40)

  flow create 0 ingress pattern eth / ipv4 tos is 50 / udp / end
    actions queue index 3 / end
  pkt = Ether()/IP(tos=50)/UDP()/Raw('X'*40)

  flow create 0 ingress pattern eth / ipv6 tc is 12 / udp / end
    actions queue index 3 / end
  pkt = Ether()/IPv6(tc=12,hlim=34,fl=0x98765)/UDP()/Raw('X'*40)

  flow create 0 ingress pattern eth / ipv6 hop is 34 / end
    actions queue index 3 / end
  pkt = Ether()/IPv6(tc=12,hlim=34,fl=0x98765)/Raw('X'*40)

This patch reads the field offsets from the NIC and returns the mask
register value.

Fixes: 98f055707685 ("i40e: configure input fields for RSS or flow director")
Fixes: 92cf7f8ec082 ("i40e: allow filtering on more IP header fields")

Signed-off-by: Alvin Zhang
Tested-by: Lingli Chen
---
 drivers/net/i40e/i40e_ethdev.c | 158 +++++++++++++++++++++++++--------
 drivers/net/i40e/i40e_ethdev.h |   4 +-
 drivers/net/i40e/i40e_flow.c   |   2 +-
 3 files changed, 125 insertions(+), 39 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2e23631211..79558c8aad 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -201,12 +201,12 @@
 #define I40E_TRANSLATE_INSET 0
 #define I40E_TRANSLATE_REG   1
 
-#define I40E_INSET_IPV4_TOS_MASK        0x0009FF00UL
-#define I40E_INSET_IPv4_TTL_MASK        0x000D00FFUL
-#define I40E_INSET_IPV4_PROTO_MASK      0x000DFF00UL
-#define I40E_INSET_IPV6_TC_MASK         0x0009F00FUL
-#define I40E_INSET_IPV6_HOP_LIMIT_MASK  0x000CFF00UL
-#define I40E_INSET_IPV6_NEXT_HDR_MASK   0x000C00FFUL
+#define I40E_INSET_IPV4_TOS_MASK        0x0000FF00UL
+#define I40E_INSET_IPV4_TTL_MASK        0x000000FFUL
+#define I40E_INSET_IPV4_PROTO_MASK      0x0000FF00UL
+#define I40E_INSET_IPV6_TC_MASK         0x0000F00FUL
+#define I40E_INSET_IPV6_HOP_LIMIT_MASK  0x0000FF00UL
+#define I40E_INSET_IPV6_NEXT_HDR_MASK   0x000000FFUL
 
 /* PCI offset for querying capability */
 #define PCI_DEV_CAP_REG            0xA4
@@ -219,6 +219,25 @@
 /* Bit mask of Extended Tag enable/disable */
 #define PCI_DEV_CTRL_EXT_TAG_MASK  (1 << PCI_DEV_CTRL_EXT_TAG_SHIFT)
 
+#define I40E_GLQF_PIT_IPV4_START	2
+#define I40E_GLQF_PIT_IPV4_COUNT	2
+#define I40E_GLQF_PIT_IPV6_START	4
+#define I40E_GLQF_PIT_IPV6_COUNT	2
+
+#define I40E_GLQF_PIT_SOURCE_OFF_GET(a)	\
+	(((a) & I40E_GLQF_PIT_SOURCE_OFF_MASK) >> \
+	 I40E_GLQF_PIT_SOURCE_OFF_SHIFT)
+
+#define I40E_GLQF_PIT_DEST_OFF_GET(a) \
+	(((a) & I40E_GLQF_PIT_DEST_OFF_MASK) >> \
+	 I40E_GLQF_PIT_DEST_OFF_SHIFT)
+
+#define I40E_GLQF_PIT_FSIZE_GET(a)	(((a) & I40E_GLQF_PIT_FSIZE_MASK) >> \
+					 I40E_GLQF_PIT_FSIZE_SHIFT)
+
+#define I40E_GLQF_PIT_BUILD(off, mask)	(((off) << 16) | (mask))
+#define I40E_FDIR_FIELD_OFFSET(a)	((a) >> 1)
+
 static int eth_i40e_dev_init(struct rte_eth_dev *eth_dev, void *init_params);
 static int eth_i40e_dev_uninit(struct rte_eth_dev *eth_dev);
 static int i40e_dev_configure(struct rte_eth_dev *dev);
@@ -9628,49 +9647,116 @@ i40e_translate_input_set_reg(enum i40e_mac_type type, uint64_t input)
 	return val;
 }
 
+static int
+i40e_get_inset_field_offset(struct i40e_hw *hw, uint32_t pit_reg_start,
+			    uint32_t pit_reg_count, uint32_t hdr_off)
+{
+	const uint32_t pit_reg_end = pit_reg_start + pit_reg_count;
+	uint32_t field_off = I40E_FDIR_FIELD_OFFSET(hdr_off);
+	uint32_t i, reg_val, src_off, count;
+
+	for (i = pit_reg_start; i < pit_reg_end; i++) {
+		reg_val = i40e_read_rx_ctl(hw, I40E_GLQF_PIT(i));
+
+		src_off = I40E_GLQF_PIT_SOURCE_OFF_GET(reg_val);
+		count = I40E_GLQF_PIT_FSIZE_GET(reg_val);
+
+		if (src_off <= field_off && (src_off + count) > field_off)
+			break;
+	}
+
+	if (i >= pit_reg_end) {
+		PMD_DRV_LOG(ERR,
+			    "Hardware GLQF_PIT configuration does not support this field mask");
+		return -1;
+	}
+
+	return I40E_GLQF_PIT_DEST_OFF_GET(reg_val) + field_off - src_off;
+}
+
 int
-i40e_generate_inset_mask_reg(uint64_t inset, uint32_t *mask, uint8_t nb_elem)
+i40e_generate_inset_mask_reg(struct i40e_hw *hw, uint64_t inset,
+			     uint32_t *mask, uint8_t nb_elem)
 {
-	uint8_t i, idx = 0;
-	uint64_t inset_need_mask = inset;
+	static const uint64_t mask_inset[] = {
+		I40E_INSET_IPV4_PROTO | I40E_INSET_IPV4_TTL,
+		I40E_INSET_IPV6_NEXT_HDR | I40E_INSET_IPV6_HOP_LIMIT };
 
 	static const struct {
 		uint64_t inset;
 		uint32_t mask;
-	} inset_mask_map[] = {
-		{I40E_INSET_IPV4_TOS, I40E_INSET_IPV4_TOS_MASK},
-		{I40E_INSET_IPV4_PROTO | I40E_INSET_IPV4_TTL, 0},
-		{I40E_INSET_IPV4_PROTO, I40E_INSET_IPV4_PROTO_MASK},
-		{I40E_INSET_IPV4_TTL, I40E_INSET_IPv4_TTL_MASK},
-		{I40E_INSET_IPV6_TC, I40E_INSET_IPV6_TC_MASK},
-		{I40E_INSET_IPV6_NEXT_HDR | I40E_INSET_IPV6_HOP_LIMIT, 0},
-		{I40E_INSET_IPV6_NEXT_HDR, I40E_INSET_IPV6_NEXT_HDR_MASK},
-		{I40E_INSET_IPV6_HOP_LIMIT, I40E_INSET_IPV6_HOP_LIMIT_MASK},
+		uint32_t offset;
+	} inset_mask_offset_map[] = {
+		{ I40E_INSET_IPV4_TOS, I40E_INSET_IPV4_TOS_MASK,
+		  offsetof(struct rte_ipv4_hdr, type_of_service) },
+
+		{ I40E_INSET_IPV4_PROTO, I40E_INSET_IPV4_PROTO_MASK,
+		  offsetof(struct rte_ipv4_hdr, next_proto_id) },
+
+		{ I40E_INSET_IPV4_TTL, I40E_INSET_IPV4_TTL_MASK,
+		  offsetof(struct rte_ipv4_hdr, time_to_live) },
+
+		{ I40E_INSET_IPV6_TC, I40E_INSET_IPV6_TC_MASK,
+		  offsetof(struct rte_ipv6_hdr, vtc_flow) },
+
+		{ I40E_INSET_IPV6_NEXT_HDR, I40E_INSET_IPV6_NEXT_HDR_MASK,
+		  offsetof(struct rte_ipv6_hdr, proto) },
+
+		{ I40E_INSET_IPV6_HOP_LIMIT, I40E_INSET_IPV6_HOP_LIMIT_MASK,
+		  offsetof(struct rte_ipv6_hdr, hop_limits) },
 	};
 
-	if (!inset || !mask || !nb_elem)
+	uint32_t i;
+	int idx = 0;
+
+	assert(mask);
+	if (!inset)
 		return 0;
 
-	for (i = 0, idx = 0; i < RTE_DIM(inset_mask_map); i++) {
+	for (i = 0; i < RTE_DIM(mask_inset); i++) {
 		/* Clear the inset bit, if no MASK is required,
 		 * for example proto + ttl
 		 */
-		if ((inset & inset_mask_map[i].inset) ==
-		     inset_mask_map[i].inset && inset_mask_map[i].mask == 0)
-			inset_need_mask &= ~inset_mask_map[i].inset;
-		if (!inset_need_mask)
-			return 0;
+		if ((mask_inset[i] & inset) == mask_inset[i]) {
+			inset &= ~mask_inset[i];
+			if (!inset)
+				return 0;
+		}
 	}
-	for (i = 0, idx = 0; i < RTE_DIM(inset_mask_map); i++) {
-		if ((inset_need_mask & inset_mask_map[i].inset) ==
-		    inset_mask_map[i].inset) {
-			if (idx >= nb_elem) {
-				PMD_DRV_LOG(ERR, "exceed maximal number of bitmasks");
-				return -EINVAL;
-			}
-			mask[idx] = inset_mask_map[i].mask;
-			idx++;
+
+	for (i = 0; i < RTE_DIM(inset_mask_offset_map); i++) {
+		uint32_t pit_start, pit_count;
+		int offset;
+
+		if (!(inset_mask_offset_map[i].inset & inset))
+			continue;
+
+		if (inset_mask_offset_map[i].inset &
+		    (I40E_INSET_IPV4_TOS | I40E_INSET_IPV4_PROTO |
+		     I40E_INSET_IPV4_TTL)) {
+			pit_start = I40E_GLQF_PIT_IPV4_START;
+			pit_count = I40E_GLQF_PIT_IPV4_COUNT;
+		} else {
+			pit_start = I40E_GLQF_PIT_IPV6_START;
+			pit_count = I40E_GLQF_PIT_IPV6_COUNT;
+		}
+
+		offset = i40e_get_inset_field_offset(hw, pit_start, pit_count,
+				inset_mask_offset_map[i].offset);
+
+		if (offset < 0)
+			return -EINVAL;
+
+		if (idx >= nb_elem) {
+			PMD_DRV_LOG(ERR,
+				    "Configuration of inset mask out of range %u",
+				    nb_elem);
+			return -ERANGE;
 		}
+
+		mask[idx] = I40E_GLQF_PIT_BUILD((uint32_t)offset,
+				inset_mask_offset_map[i].mask);
+		idx++;
 	}
 
 	return idx;
@@ -9724,7 +9810,7 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
 
 		input_set = i40e_get_default_input_set(pctype);
 
-		num = i40e_generate_inset_mask_reg(input_set, mask_reg,
+		num = i40e_generate_inset_mask_reg(hw, input_set, mask_reg,
 						   I40E_INSET_MASK_NUM_REG);
 		if (num < 0)
 			return;
@@ -9829,7 +9915,7 @@ i40e_hash_filter_inset_select(struct i40e_hw *hw,
 		inset_reg |= i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, pctype));
 		input_set |= pf->hash_input_set[pctype];
 	}
-	num = i40e_generate_inset_mask_reg(input_set, mask_reg,
+	num = i40e_generate_inset_mask_reg(hw, input_set, mask_reg,
 					   I40E_INSET_MASK_NUM_REG);
 	if (num < 0)
 		return -EINVAL;
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 91d6830a3c..7c97c609f3 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -1276,8 +1276,8 @@ bool is_i40evf_supported(struct rte_eth_dev *dev);
 
 int i40e_validate_input_set(enum i40e_filter_pctype pctype,
 			    enum rte_filter_type filter, uint64_t inset);
-int i40e_generate_inset_mask_reg(uint64_t inset, uint32_t *mask,
-				 uint8_t nb_elem);
+int i40e_generate_inset_mask_reg(struct i40e_hw *hw, uint64_t inset,
+				 uint32_t *mask, uint8_t nb_elem);
 uint64_t i40e_translate_input_set_reg(enum i40e_mac_type type, uint64_t input);
 void i40e_check_write_reg(struct i40e_hw *hw, uint32_t addr, uint32_t val);
 void i40e_check_write_global_reg(struct i40e_hw *hw,
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index ed2565e88f..e8a52bdb76 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2201,7 +2201,7 @@ i40e_flow_set_fdir_inset(struct i40e_pf *pf,
 	    !memcmp(&pf->fdir.input_set[pctype], &input_set, sizeof(uint64_t)))
 		return 0;
 
-	num = i40e_generate_inset_mask_reg(input_set, mask_reg,
+	num = i40e_generate_inset_mask_reg(hw, input_set, mask_reg,
 					   I40E_INSET_MASK_NUM_REG);
 	if (num < 0)
 		return -EINVAL;
-- 
2.31.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2021-05-17 17:40:31.913775418 +0200
+++ 0061-net-i40e-fix-input-set-field-mask.patch	2021-05-17 17:40:29.211809791 +0200
@@ -1 +1 @@
-From 2a2fd19d46e598a6816be8f81aa62ec372fd495d Mon Sep 17 00:00:00 2001
+From ce1c16ab2255d945f6f493539b7d3e71128d0e76 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 2a2fd19d46e598a6816be8f81aa62ec372fd495d ]
+
@@ -35 +36,0 @@
-Cc: stable@dpdk.org
@@ -46 +47 @@
-index 9b86bcdc69..09a1402a3f 100644
+index 2e23631211..79558c8aad 100644
@@ -49 +50 @@
-@@ -202,12 +202,12 @@
+@@ -201,12 +201,12 @@
@@ -68 +69 @@
-@@ -220,6 +220,25 @@
+@@ -219,6 +219,25 @@
@@ -94 +95 @@
-@@ -9424,49 +9443,116 @@ i40e_translate_input_set_reg(enum i40e_mac_type type, uint64_t input)
+@@ -9628,49 +9647,116 @@ i40e_translate_input_set_reg(enum i40e_mac_type type, uint64_t input)
@@ -239 +240 @@
-@@ -9520,7 +9606,7 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
+@@ -9724,7 +9810,7 @@ i40e_filter_input_set_init(struct i40e_pf *pf)
@@ -248 +249 @@
-@@ -9600,7 +9686,7 @@ i40e_set_hash_inset(struct i40e_hw *hw, uint64_t input_set,
+@@ -9829,7 +9915,7 @@ i40e_hash_filter_inset_select(struct i40e_hw *hw,
@@ -258 +259 @@
-index 1e8f5d3a87..faf6896fbc 100644
+index 91d6830a3c..7c97c609f3 100644
@@ -261,2 +262,2 @@
-@@ -1458,8 +1458,8 @@ void i40e_set_symmetric_hash_enable_per_port(struct i40e_hw *hw,
-						uint8_t enable);
+@@ -1276,8 +1276,8 @@ bool is_i40evf_supported(struct rte_eth_dev *dev);
+
@@ -273 +274 @@
-index 3e514d5f38..1ee8959e56 100644
+index ed2565e88f..e8a52bdb76 100644
@@ -276 +277 @@
-@@ -2269,7 +2269,7 @@ i40e_flow_set_fdir_inset(struct i40e_pf *pf,
+@@ -2201,7 +2201,7 @@ i40e_flow_set_fdir_inset(struct i40e_pf *pf,
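For completeness, a small, self-contained sketch of the lookup that the new
i40e_get_inset_field_offset() performs: walk a range of GLQF_PIT entries,
find the one whose source window covers the field's 16-bit word offset in
the packet header, and translate it to the destination offset that goes into
the mask register. struct pit_entry, find_dest_offset() and the PIT values
in main() are illustrative assumptions, not driver API:

  #include <stdint.h>
  #include <stdio.h>

  struct pit_entry {
          uint32_t source_off; /* first 16-bit word covered in the header */
          uint32_t fsize;      /* number of 16-bit words covered */
          uint32_t dest_off;   /* destination offset programmed in the NIC */
  };

  /* Return the destination word offset for a header field, or -1 if the
   * field is not covered by any PIT entry (mirrors the error path that
   * logs "Hardware GLQF_PIT configuration does not support this field
   * mask" in the patch). */
  static int
  find_dest_offset(const struct pit_entry *pit, unsigned int count,
                   uint32_t hdr_byte_off)
  {
          uint32_t field_off = hdr_byte_off >> 1; /* bytes -> 16-bit words */
          unsigned int i;

          for (i = 0; i < count; i++) {
                  if (pit[i].source_off <= field_off &&
                      pit[i].source_off + pit[i].fsize > field_off)
                          return (int)(pit[i].dest_off + field_off -
                                       pit[i].source_off);
          }
          return -1;
  }

  int
  main(void)
  {
          /* Example PIT layout; real values come from the NIC registers. */
          const struct pit_entry ipv4_pit[2] = {
                  { .source_off = 0, .fsize = 4, .dest_off = 7 },
                  { .source_off = 4, .fsize = 6, .dest_off = 11 },
          };

          /* offsetof(struct rte_ipv4_hdr, time_to_live) is 8 bytes. */
          printf("TTL dest word offset: %d\n",
                 find_dest_offset(ipv4_pit, 2, 8));
          return 0;
  }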