From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, junfeng.guo@intel.com, simei.su@intel.com, yahui.cao@intel.com
Date: Mon, 14 Dec 2020 14:49:12 +0800
Message-Id: <20201214064913.2306802-5-junfeng.guo@intel.com>
In-Reply-To: <20201214064913.2306802-1-junfeng.guo@intel.com>
References: <20201214064913.2306802-1-junfeng.guo@intel.com>
Subject: [dpdk-dev] [PATCH 4/5] net/iavf: support eCPRI MSG TYPE 0 for AVF FDIR

For eCPRI MSG Type 0, the ecpriRtcid/ecpriPcid field within the eCPRI
header will be extracted into the Field Vector for AVF FDIR, so flow
rules can match and steer on this field.
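
For reference, the match keys are the message type in the eCPRI common
header and the PC_ID at the start of the type 0 payload. A rough sketch of
the wire layout (paraphrased from DPDK's rte_ecpri.h and the eCPRI spec
linked below; the struct names here are illustrative only, and the real
rte_ecpri.h definition handles bit-field ordering per host endianness):

    #include <stdint.h>

    /* eCPRI common header, 4 bytes, big-endian on the wire (sketch only). */
    struct ecpri_common_hdr_wire {
        uint32_t revision:4;   /* protocol revision */
        uint32_t reserved:3;
        uint32_t c:1;          /* concatenation indicator */
        uint32_t type:8;       /* message type; 0 = IQ data */
        uint32_t size:16;      /* payload size */
    };

    /* First fields of the message type 0 (IQ data) payload; pc_id carries
     * the ecpriPcid value that this patch programs into the Field Vector.
     */
    struct ecpri_iq_data_prefix_wire {
        uint16_t pc_id;        /* physical channel ID, big-endian */
        uint16_t seq_id;       /* sequence ID, big-endian */
    };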
SPEC for eCPRI: http://www.cpri.info/downloads/eCPRI_v_2.0_2019_05_10c.pdf

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/iavf/iavf_fdir.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 7054bde0b9..e92ca17581 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -104,6 +104,9 @@
 #define IAVF_FDIR_INSET_PFCP (\
 	IAVF_INSET_PFCP_S_FIELD)
 
+#define IAVF_FDIR_INSET_ECPRI (\
+	IAVF_INSET_ECPRI)
+
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
 	{iavf_pattern_ethertype,		IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
@@ -128,6 +131,8 @@ static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
 	{iavf_pattern_eth_ipv6_udp_esp,		IAVF_FDIR_INSET_IPV6_NATT_ESP,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_pfcp,		IAVF_FDIR_INSET_PFCP,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv6_pfcp,		IAVF_FDIR_INSET_PFCP,		IAVF_INSET_NONE},
+	{iavf_pattern_eth_ecpri,		IAVF_FDIR_INSET_ECPRI,		IAVF_INSET_NONE},
+	{iavf_pattern_eth_ipv4_ecpri,		IAVF_FDIR_INSET_ECPRI,		IAVF_INSET_NONE},
 };
 
 static struct iavf_flow_parser iavf_fdir_parser;
@@ -469,6 +474,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	const struct rte_flow_item_esp *esp_spec, *esp_mask;
 	const struct rte_flow_item_ah *ah_spec, *ah_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
+	const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+	struct rte_ecpri_common_hdr ecpri_common;
 	uint64_t input_set = IAVF_INSET_NONE;
 	enum rte_flow_item_type next_type;
@@ -906,6 +913,31 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
 			break;
 
+		case RTE_FLOW_ITEM_TYPE_ECPRI:
+			ecpri_spec = item->spec;
+			ecpri_mask = item->mask;
+
+			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+
+			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ECPRI);
+
+			if (ecpri_spec && ecpri_mask) {
+				ecpri_common.u32 = rte_be_to_cpu_32(ecpri_spec->hdr.common.u32);
+
+				if (ecpri_common.type == RTE_ECPRI_MSG_TYPE_IQ_DATA &&
+				    ecpri_mask->hdr.type0.pc_id == UINT16_MAX) {
+					input_set |= IAVF_ECPRI_PC_RTC_ID;
+					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ECPRI,
+									 PC_RTC_ID);
+				}
+
+				rte_memcpy(hdr->buffer, ecpri_spec,
+					   sizeof(*ecpri_spec));
+			}
+
+			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			break;
+
 		case RTE_FLOW_ITEM_TYPE_VOID:
 			break;
 
-- 
2.25.1
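
Not part of the patch: a minimal sketch of how an application could exercise
the new FDIR path through the generic rte_flow API on an AVF port with FDIR
support. The function name, port id, queue index, and PC_ID value 0x1234 are
placeholders chosen for illustration.

    #include <stdint.h>
    #include <rte_flow.h>
    #include <rte_ecpri.h>
    #include <rte_byteorder.h>

    /* Steer eCPRI IQ-data (message type 0) packets with PC_ID 0x1234 on
     * 'port_id' to Rx queue 3.  Returns the flow handle, or NULL on error.
     */
    static struct rte_flow *
    ecpri_pcid_fdir_rule(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ecpri ecpri_spec = { 0 };
        struct rte_flow_item_ecpri ecpri_mask = { 0 };
        struct rte_flow_action_queue queue = { .index = 3 };
        struct rte_ecpri_common_hdr common = { 0 };
        struct rte_flow_error error;
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_ECPRI,
              .spec = &ecpri_spec, .mask = &ecpri_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Common header: message type 0 (IQ data).  The iavf parser reads
         * hdr.common.u32 as big-endian, so convert before storing.
         */
        common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA;
        ecpri_spec.hdr.common.u32 = rte_cpu_to_be_32(common.u32);

        /* Match PC_ID 0x1234 exactly; a full 16-bit mask is what the parser
         * requires before it programs the PC_RTC_ID field.
         */
        ecpri_spec.hdr.type0.pc_id = rte_cpu_to_be_16(0x1234);
        ecpri_mask.hdr.type0.pc_id = UINT16_MAX;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }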