From: Haiyue Wang
To: stable@dpdk.org
Cc: bluca@debian.org, xuemingl@nvidia.com, thomas@monjalon.net, christian.ehrhardt@canonical.com, ktraynor@redhat.com, qi.z.zhang@intel.com, Dan Nowlin, Qiming Yang
Date: Fri, 11 Jun 2021 14:58:15 +0800
Message-Id: <20210611065825.47678-9-haiyue.wang@intel.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210611065825.47678-1-haiyue.wang@intel.com>
References: <20210611065825.47678-1-haiyue.wang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] [PATCH 20.11 v1 08/18] net/ice/base: update boost TCAM for DVM

From: Qi Zhang

[ upstream commit f977165db0ba8435269a5e19e0e9239a4b22d140 ]

Add code to update boost TCAM entries to enable DVM. This requires
enabling DVM entries and disabling SVM entries.

Signed-off-by: Dan Nowlin
Signed-off-by: Qi Zhang
Acked-by: Qiming Yang
---
 drivers/net/ice/base/ice_flex_pipe.c | 223 +++++++++++++++++++++++----
 drivers/net/ice/base/ice_flex_pipe.h |   1 +
 drivers/net/ice/base/ice_flex_type.h |  13 ++
 drivers/net/ice/base/ice_type.h      |   2 +
 drivers/net/ice/base/ice_vlan_mode.c |   7 +
 5 files changed, 213 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index cced7b6352..058694653a 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -7,9 +7,17 @@
 #include "ice_protocol_type.h"
 #include "ice_flow.h"
 
+/* For supporting double VLAN mode, it is necessary to enable or disable certain
+ * boost tcam entries. The metadata labels names that match the following
+ * prefixes will be saved to allow enabling double VLAN mode.
+ */
+#define ICE_DVM_PRE	"BOOST_MAC_VLAN_DVM"	/* enable these entries */
+#define ICE_SVM_PRE	"BOOST_MAC_VLAN_SVM"	/* disable these entries */
+
 /* To support tunneling entries by PF, the package will append the PF number to
  * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
  */
+#define ICE_TNL_PRE	"TNL_"
 static const struct ice_tunnel_type_scan tnls[] = {
 	{ TNL_VXLAN,		"TNL_VXLAN_PF" },
 	{ TNL_GENEVE,		"TNL_GENEVE_PF" },
@@ -452,6 +460,57 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
 	return label->name;
 }
 
+/**
+ * ice_add_tunnel_hint
+ * @hw: pointer to the HW structure
+ * @label_name: label text
+ * @val: value of the tunnel port boost entry
+ */
+static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val)
+{
+	if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
+		u16 i;
+
+		for (i = 0; tnls[i].type != TNL_LAST; i++) {
+			size_t len = strlen(tnls[i].label_prefix);
+
+			/* Look for matching label start, before continuing */
+			if (strncmp(label_name, tnls[i].label_prefix, len))
+				continue;
+
+			/* Make sure this label matches our PF. Note that the PF
+			 * character ('0' - '7') will be located where our
+			 * prefix string's null terminator is located.
+			 */
+			if ((label_name[len] - '0') == hw->pf_id) {
+				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
+				hw->tnl.tbl[hw->tnl.count].valid = false;
+				hw->tnl.tbl[hw->tnl.count].in_use = false;
+				hw->tnl.tbl[hw->tnl.count].marked = false;
+				hw->tnl.tbl[hw->tnl.count].boost_addr = val;
+				hw->tnl.tbl[hw->tnl.count].port = 0;
+				hw->tnl.count++;
+				break;
+			}
+		}
+	}
+}
+
+/**
+ * ice_add_dvm_hint
+ * @hw: pointer to the HW structure
+ * @val: value of the boost entry
+ * @enable: true if entry needs to be enabled, or false if needs to be disabled
+ */
+static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable)
+{
+	if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) {
+		hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val;
+		hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable;
+		hw->dvm_upd.count++;
+	}
+}
+
 /**
  * ice_init_pkg_hints
  * @hw: pointer to the HW structure
@@ -478,40 +537,34 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 	label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state,
 				     &val);
 
-	while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
-		for (i = 0; tnls[i].type != TNL_LAST; i++) {
-			size_t len = strlen(tnls[i].label_prefix);
+	while (label_name) {
+		if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE)))
+			/* check for a tunnel entry */
+			ice_add_tunnel_hint(hw, label_name, val);
 
-			/* Look for matching label start, before continuing */
-			if (strncmp(label_name, tnls[i].label_prefix, len))
-				continue;
+		/* check for a dvm mode entry */
+		else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE)))
+			ice_add_dvm_hint(hw, val, true);
 
-			/* Make sure this label matches our PF. Note that the PF
-			 * character ('0' - '7') will be located where our
-			 * prefix string's null terminator is located.
-			 */
-			if ((label_name[len] - '0') == hw->pf_id) {
-				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
-				hw->tnl.tbl[hw->tnl.count].valid = false;
-				hw->tnl.tbl[hw->tnl.count].in_use = false;
-				hw->tnl.tbl[hw->tnl.count].marked = false;
-				hw->tnl.tbl[hw->tnl.count].boost_addr = val;
-				hw->tnl.tbl[hw->tnl.count].port = 0;
-				hw->tnl.count++;
-				break;
-			}
-		}
+		/* check for a svm mode entry */
+		else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE)))
+			ice_add_dvm_hint(hw, val, false);
 
 		label_name = ice_enum_labels(NULL, 0, &state, &val);
 	}
 
-	/* Cache the appropriate boost TCAM entry pointers */
+	/* Cache the appropriate boost TCAM entry pointers for tunnels */
 	for (i = 0; i < hw->tnl.count; i++) {
 		ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr,
 				     &hw->tnl.tbl[i].boost_entry);
 		if (hw->tnl.tbl[i].boost_entry)
 			hw->tnl.tbl[i].valid = true;
 	}
+
+	/* Cache the appropriate boost TCAM entry pointers for DVM and SVM */
+	for (i = 0; i < hw->dvm_upd.count; i++)
+		ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr,
+				     &hw->dvm_upd.tbl[i].boost_entry);
 }
 
 /* Key creation */
@@ -916,26 +969,21 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 }
 
 /**
- * ice_update_pkg
+ * ice_update_pkg_no_lock
  * @hw: pointer to the hardware structure
  * @bufs: pointer to an array of buffers
  * @count: the number of buffers in the array
- *
- * Obtains change lock and updates package.
  */
-enum ice_status
-ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+static enum ice_status
+ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 {
-	enum ice_status status;
-	u32 offset, info, i;
-
-	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
-	if (status)
-		return status;
+	enum ice_status status = ICE_SUCCESS;
+	u32 i;
 
 	for (i = 0; i < count; i++) {
 		struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i);
 		bool last = ((i + 1) == count);
+		u32 offset, info;
 
 		status = ice_aq_update_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
 					   last, &offset, &info, NULL);
@@ -947,6 +995,28 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 		}
 	}
 
+	return status;
+}
+
+/**
+ * ice_update_pkg
+ * @hw: pointer to the hardware structure
+ * @bufs: pointer to an array of buffers
+ * @count: the number of buffers in the array
+ *
+ * Obtains change lock and updates package.
+ */
+enum ice_status
+ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
+{
+	enum ice_status status;
+
+	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+	if (status)
+		return status;
+
+	status = ice_update_pkg_no_lock(hw, bufs, count);
+
 	ice_release_change_lock(hw);
 
 	return status;
@@ -2122,6 +2192,93 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 	return res;
 }
 
+/**
+ * ice_upd_dvm_boost_entry
+ * @hw: pointer to the HW structure
+ * @entry: pointer to double vlan boost entry info
+ */
+static enum ice_status
+ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u8 val, dc, nm;
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for Rx parser, one for Tx parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_upd_dvm_boost_entry_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  ice_struct_size(sect_rx, tcam, 1));
+	if (!sect_rx)
+		goto ice_upd_dvm_boost_entry_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  ice_struct_size(sect_tx, tcam, 1));
+	if (!sect_tx)
+		goto ice_upd_dvm_boost_entry_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer */
+	ice_memcpy(sect_rx->tcam, entry->boost_entry, sizeof(*sect_rx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	/* re-write the don't care and never match bits accordingly */
+	if (entry->enable) {
+		/* all bits are don't care */
+		val = 0x00;
+		dc = 0xFF;
+		nm = 0x00;
+	} else {
+		/* disable, one never match bit, the rest are don't care */
+		val = 0x00;
+		dc = 0xF7;
+		nm = 0x08;
+	}
+
+	ice_set_key((u8 *)&sect_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key),
+		    &val, NULL, &dc, &nm, 0, sizeof(u8));
+
+	/* exact copy of entry to Tx section entry */
+	ice_memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	status = ice_update_pkg_no_lock(hw, ice_pkg_buf(bld), 1);
+
+ice_upd_dvm_boost_entry_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_set_dvm_boost_entries
+ * @hw: pointer to the HW structure
+ *
+ * Enable double vlan by updating the appropriate boost tcam entries.
+ */
+enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw)
+{
+	enum ice_status status;
+	u16 i;
+
+	for (i = 0; i < hw->dvm_upd.count; i++) {
+		status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]);
+		if (status)
+			return status;
+	}
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_create_tunnel
  * @hw: pointer to the HW structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index d4679cc940..61ce092319 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -49,6 +49,7 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type,
 			 u16 *port);
 enum ice_status
 ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
+enum ice_status ice_set_dvm_boost_entries(struct ice_hw *hw);
 enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
 bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
 bool
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 169476369b..dfa3cf3020 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -543,6 +543,19 @@ struct ice_tunnel_table {
 	u16 count;
 };
 
+struct ice_dvm_entry {
+	u16 boost_addr;
+	u16 enable;
+	struct ice_boost_tcam_entry *boost_entry;
+};
+
+#define ICE_DVM_MAX_ENTRIES	48
+
+struct ice_dvm_table {
+	struct ice_dvm_entry tbl[ICE_DVM_MAX_ENTRIES];
+	u16 count;
+};
+
 struct ice_pkg_es {
 	__le16 count;
 	__le16 offset;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index a4a4427693..104f5dc101 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -962,6 +962,8 @@ struct ice_hw {
 	/* tunneling info */
 	struct ice_lock tnl_lock;
 	struct ice_tunnel_table tnl;
+	/* dvm boost update information */
+	struct ice_dvm_table dvm_upd;
 
 	struct ice_acl_tbl *acl_tbl;
 	struct ice_fd_hw_prof **acl_prof;
diff --git a/drivers/net/ice/base/ice_vlan_mode.c b/drivers/net/ice/base/ice_vlan_mode.c
index 42bb108928..4a749cb9f1 100644
--- a/drivers/net/ice/base/ice_vlan_mode.c
+++ b/drivers/net/ice/base/ice_vlan_mode.c
@@ -312,6 +312,13 @@ static enum ice_status ice_set_dvm(struct ice_hw *hw)
 		return status;
 	}
 
+	status = ice_set_dvm_boost_entries(hw);
+	if (status) {
+		ice_debug(hw, ICE_DBG_INIT, "Failed to set boost TCAM entries for double VLAN mode, status %d\n",
+			  status);
+		return status;
+	}
+
 	return ICE_SUCCESS;
 }
 
-- 
2.32.0
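
Illustrative aside, placed after the mail trailer so it is clearly not part of
the applied diff: the enable/disable decision in ice_upd_dvm_boost_entry()
above comes down to the value (val), don't-care (dc) and never-match (nm)
bytes passed to ice_set_key(). The stand-alone C sketch below uses a
hypothetical helper (tcam_byte_matches) and a deliberately simplified
single-byte model rather than the real ice TCAM key encoding, but it shows why
dc = 0xFF turns the DVM entries into match-anything entries, while a single
cared-about never-match bit (dc = 0xF7, nm = 0x08) makes the SVM entries
unmatchable, i.e. effectively disabled.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model (assumption, not the ice_set_key() key encoding): a key
 * byte matches when every bit that is not don't-care equals the input value
 * and is not flagged never-match. A never-match bit in a cared-about
 * position can never be satisfied.
 */
static bool tcam_byte_matches(uint8_t input, uint8_t val, uint8_t dc, uint8_t nm)
{
	uint8_t care = (uint8_t)~dc;	/* bits that are actually compared */

	if (nm & care)			/* never-match in a cared position */
		return false;

	return ((input ^ val) & care) == 0;
}

int main(void)
{
	/* enable case from the patch: val=0x00, dc=0xFF, nm=0x00 -> matches any input */
	printf("enable : %d\n", tcam_byte_matches(0xAB, 0x00, 0xFF, 0x00));

	/* disable case from the patch: val=0x00, dc=0xF7, nm=0x08 -> can never match */
	printf("disable: %d\n", tcam_byte_matches(0xAB, 0x00, 0xF7, 0x08));

	return 0;
}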