From mboxrd@z Thu Jan 1 00:00:00 1970
From: SimonX Lu
To: dev@dpdk.org
Cc: jia.guo@intel.com, haiyue.wang@intel.com, qiming.yang@intel.com, beilei.xing@intel.com, orika@nvidia.com, Simon Lu
Date: Wed, 14 Oct 2020 08:41:28 +0000
Message-Id: <20201014084131.72035-6-simonx.lu@intel.com>
In-Reply-To: <20201014084131.72035-1-simonx.lu@intel.com>
References: <20201014084131.72035-1-simonx.lu@intel.com>
Subject: [dpdk-dev] [PATCH v1 5/8] net/ixgbe: use generic flow command to re-realize mirror rule
List-Id: DPDK patches and discussions
From: Simon Lu

Follow the mirror rule to add a new action, "mirror", to flow management,
so that "flow create * pattern * actions mirror *" can now replace the old
"set port * mirror-rule *" command.

Examples of mapping the old mirror rule commands to flow management
commands (in the commands below, port 0 is the PF and ports 1-3 are VFs):

1) ingress: pf => pf
   set port 0 mirror-rule 0 uplink-mirror dst-pool 4 on
   or
   flow create 0 ingress pattern pf / end actions mirror pf / end

2) egress: pf => pf
   set port 0 mirror-rule 0 downlink-mirror dst-pool 4 on
   or
   flow create 0 egress pattern pf / end actions mirror pf / end

3) ingress: pf => vf 3
   set port 0 mirror-rule 0 uplink-mirror dst-pool 3 on
   or
   flow create 0 ingress pattern pf / end actions mirror vf id 3 / end

4) egress: pf => vf 3
   set port 0 mirror-rule 0 downlink-mirror dst-pool 3 on
   or
   flow create 0 egress pattern pf / end actions mirror vf id 3 / end

5) ingress: vf 0,1 => pf
   set port 0 mirror-rule 0 pool-mirror-up 0x3 dst-pool 4 on
   or
   flow create 0 ingress pattern vf id is 0 / end actions mirror pf / end
   flow create 0 ingress pattern vf id is 1 / end actions mirror pf / end
   or
   flow create 0 ingress pattern vf id last 1 / end \
     actions mirror pf / end
   or
   flow create 0 ingress pattern vf id mask 0x3 / end \
     actions mirror pf / end

6) ingress: vf 1,2 => vf 3
   set port 0 mirror-rule 0 pool-mirror-up 0x6 dst-pool 3 on
   or
   flow create 0 ingress pattern vf id is 1 / end \
     actions mirror vf id 3 / end
   flow create 0 ingress pattern vf id is 2 / end \
     actions mirror vf id 3 / end
   or
   flow create 0 ingress pattern vf id is 1 id last 2 / end \
     actions mirror vf id 3 / end
   or
   flow create 0 ingress pattern vf id mask 0x6 / end \
     actions mirror vf id 3 / end

7) ingress: vlan 4,6 => vf 3
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   set port 0 mirror-rule 0 vlan-mirror 4,6 dst-pool 4 on
   or
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   flow create 0 ingress pattern vlan vid is 4 / end \
     actions mirror vf id 3 / end
   flow create 0 ingress pattern vlan vid is 6 / end \
     actions mirror vf id 3 / end
   or
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   flow create 0 ingress pattern vlan vid mask 0x28 / end \
     actions mirror vf id 3 / end
   or
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   flow create 0 ingress pattern vlan vid is 4 vid \
     last 6 vid mask 0x5 / end actions mirror vf id 3 / end

Signed-off-by: Simon Lu
---
 drivers/net/ixgbe/ixgbe_flow.c | 217 +++++++++++++++++++++++++++++++++
 1 file changed, 217 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 0b0e7c630..7670f6870 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -117,6 +117,7 @@ static struct ixgbe_syn_filter_list filter_syn_list;
 static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
 static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
 static struct ixgbe_rss_filter_list filter_rss_list;
+static struct ixgbe_mirror_filter_list filter_mirror_list;
 static struct ixgbe_flow_mem_list ixgbe_flow_list;
 
 /**
@@ -3170,6 +3171,172 @@ ixgbe_parse_mirror_filter(struct rte_eth_dev *dev,
 	return ixgbe_flow_parse_mirror_action(dev, actions, error, conf);
 }
 
+static int
+ixgbe_config_mirror_filter_add(struct rte_eth_dev *dev,
+			       struct ixgbe_flow_mirror_conf *mirror_conf)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint32_t mr_ctl, vlvf;
+	uint32_t mp_lsb = 0;
+	uint32_t mv_msb = 0;
+	uint32_t mv_lsb = 0;
+	uint32_t mp_msb = 0;
+	uint8_t i = 0;
+	int reg_index = 0;
+	uint64_t vlan_mask = 0;
+
+	const uint8_t pool_mask_offset = 32;
+	const uint8_t vlan_mask_offset = 32;
+	const uint8_t dst_pool_offset = 8;
+	const uint8_t rule_mr_offset = 4;
+	const uint8_t mirror_rule_mask = 0x0F;
+
+	struct ixgbe_hw *hw =
+		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int8_t rule_id;
+	uint8_t mirror_type = 0;
+
+	if (ixgbe_vt_check(hw) < 0)
+		return -ENOTSUP;
+
+	if (IXGBE_INVALID_MIRROR_TYPE(mirror_conf->rule_type)) {
+		PMD_DRV_LOG(ERR, "unsupported mirror type 0x%x.",
+			    mirror_conf->rule_type);
+		return -EINVAL;
+	}
+
+	rule_id = ixgbe_mirror_filter_insert(filter_info, mirror_conf);
+	if (rule_id < 0) {
+		PMD_DRV_LOG(ERR, "more than maximum mirror count(%d).",
+			    IXGBE_MAX_MIRROR_RULES);
+		return -EINVAL;
+	}
+
+	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+		mirror_type |= IXGBE_MRCTL_VLME;
+		/* Check if vlan id is valid and find corresponding VLAN ID
+		 * index in VLVF
+		 */
+		for (i = 0; i < pci_dev->max_vfs; i++)
+			if (mirror_conf->vlan_mask & (1ULL << i)) {
+				/* search vlan id related pool vlan filter
+				 * index
+				 */
+				reg_index = ixgbe_find_vlvf_slot(hw,
+						mirror_conf->vlan_id[i],
+						false);
+				if (reg_index < 0)
+					return -EINVAL;
+				vlvf = IXGBE_READ_REG(hw,
+						IXGBE_VLVF(reg_index));
+				if ((vlvf & IXGBE_VLVF_VIEN) &&
+				    ((vlvf & IXGBE_VLVF_VLANID_MASK) ==
+				     mirror_conf->vlan_id[i])) {
+					vlan_mask |= (1ULL << reg_index);
+				} else {
+					ixgbe_mirror_filter_remove(filter_info,
+						mirror_conf->rule_id);
+					return -EINVAL;
+				}
+			}
+
+		mv_lsb = vlan_mask & 0xFFFFFFFF;
+		mv_msb = vlan_mask >> vlan_mask_offset;
+	}
+
+	/**
+	 * if enable pool mirror, write related pool mask register; if disable
+	 * pool mirror, clear PFMRVM register
+	 */
+	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+		mirror_type |= IXGBE_MRCTL_VPME;
+		mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
+		mp_msb = mirror_conf->pool_mask >> pool_mask_offset;
+	}
+	if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+		mirror_type |= IXGBE_MRCTL_UPME;
+	if (mirror_conf->rule_type & ETH_MIRROR_DOWNLINK_PORT)
+		mirror_type |= IXGBE_MRCTL_DPME;
+
+	/* read mirror control register and recalculate it */
+	mr_ctl = IXGBE_READ_REG(hw, IXGBE_MRCTL(rule_id));
+	mr_ctl |= mirror_type;
+	mr_ctl &= mirror_rule_mask;
+	mr_ctl |= mirror_conf->dst_pool << dst_pool_offset;
+
+	/* write mirror control register */
+	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
+
+	/* write pool mirror control register */
+	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
+				mp_msb);
+	}
+	/* write VLAN mirror control register */
+	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
+				mv_msb);
+	}
+
+	return 0;
+}
+
+/* remove the mirror filter */
+static int
+ixgbe_config_mirror_filter_del(struct rte_eth_dev *dev,
+			       struct ixgbe_flow_mirror_conf *conf)
+{
+	struct ixgbe_hw *hw =
+		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint8_t rule_id = conf->rule_id;
+	int mr_ctl = 0;
+	uint32_t lsb_val = 0;
+	uint32_t msb_val = 0;
+	const uint8_t rule_mr_offset = 4;
+
+	if (ixgbe_vt_check(hw) < 0)
+		return -ENOTSUP;
+
+	if (rule_id >= IXGBE_MAX_MIRROR_RULES)
+		return -EINVAL;
+
+	/* clear PFVMCTL register */
+	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
+
+	/* clear pool mask register */
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), lsb_val);
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset), msb_val);
+
+	/* clear vlan mask register */
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), lsb_val);
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset), msb_val);
+
+	ixgbe_mirror_filter_remove(filter_info, rule_id);
+	return 0;
+}
+
+static void
+ixgbe_clear_all_mirror_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_MIRROR_RULES; i++) {
+		if (filter_info->mirror_mask & (1 << i)) {
+			ixgbe_config_mirror_filter_del(dev,
+				&filter_info->mirror_filters[i]);
+		}
+	}
+}
+
 void
 ixgbe_filterlist_init(void)
 {
@@ -3179,6 +3346,7 @@ ixgbe_filterlist_init(void)
 	TAILQ_INIT(&filter_fdir_list);
 	TAILQ_INIT(&filter_l2_tunnel_list);
 	TAILQ_INIT(&filter_rss_list);
+	TAILQ_INIT(&filter_mirror_list);
 	TAILQ_INIT(&ixgbe_flow_list);
 }
 
@@ -3192,6 +3360,7 @@ ixgbe_filterlist_flush(void)
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 
 	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
 		TAILQ_REMOVE(&filter_ntuple_list,
@@ -3235,6 +3404,13 @@ ixgbe_filterlist_flush(void)
 		rte_free(rss_filter_ptr);
 	}
 
+	while ((mirror_filter_ptr = TAILQ_FIRST(&filter_mirror_list))) {
+		TAILQ_REMOVE(&filter_mirror_list,
+			     mirror_filter_ptr,
+			     entries);
+		rte_free(mirror_filter_ptr);
+	}
+
 	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
 		TAILQ_REMOVE(&ixgbe_flow_list,
 			     ixgbe_flow_mem_ptr,
@@ -3266,6 +3442,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_rte_flow_rss_conf rss_conf;
+	struct ixgbe_flow_mirror_conf mirror_conf;
 	struct rte_flow *flow = NULL;
 	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
 	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
@@ -3273,6 +3450,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
 	uint8_t first_mask = FALSE;
 
@@ -3495,6 +3673,32 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		}
 	}
 
+	memset(&mirror_conf, 0, sizeof(struct ixgbe_flow_mirror_conf));
+	ret = ixgbe_parse_mirror_filter(dev, attr, pattern,
+					actions, &mirror_conf, error);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "failed to parse mirror filter");
+		goto out;
+	}
+
+	ret = ixgbe_config_mirror_filter_add(dev, &mirror_conf);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "failed to add mirror filter");
+		goto out;
+	}
+
+	mirror_filter_ptr = rte_zmalloc("ixgbe_mirror_filter",
+			sizeof(struct ixgbe_mirror_conf_ele), 0);
+	if (!mirror_filter_ptr) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		goto out;
+	}
+	mirror_filter_ptr->filter_info = mirror_conf;
+	TAILQ_INSERT_TAIL(&filter_mirror_list,
+			  mirror_filter_ptr, entries);
+	flow->rule = mirror_filter_ptr;
+	flow->filter_type = RTE_ETH_FILTER_MIRROR;
+	return flow;
+
 out:
 	TAILQ_REMOVE(&ixgbe_flow_list,
 		ixgbe_flow_mem_ptr, entries);
@@ -3586,6 +3790,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -3665,6 +3870,17 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			rte_free(rss_filter_ptr);
 		}
 		break;
+	case RTE_ETH_FILTER_MIRROR:
+		mirror_filter_ptr = (struct ixgbe_mirror_conf_ele *)
+			pmd_flow->rule;
+		ret = ixgbe_config_mirror_filter_del(dev,
+			&mirror_filter_ptr->filter_info);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_mirror_list,
+				     mirror_filter_ptr, entries);
+			rte_free(mirror_filter_ptr);
+		}
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 			    filter_type);
@@ -3717,6 +3933,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 	}
 
 	ixgbe_clear_rss_filter(dev);
+	ixgbe_clear_all_mirror_filter(dev);
 
 	ixgbe_filterlist_flush();
-- 
2.17.1