From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeff Guo <jia.guo@intel.com>
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, jia.guo@intel.com
Date: Wed, 9 Sep 2020 10:54:15 +0800
Message-Id: <20200909025415.6185-1-jia.guo@intel.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction
List-Id: DPDK patches and discussions

Enable metadata extraction for flexible descriptors in AVF, allowing
network functions to get metadata directly
without additional parsing, which reduces the CPU cost on VFs. The
metadata extraction covers the VLAN/IPv4/IPv6/IPv6-flow/TCP/OVS/IP-offset
flexible descriptors. The VF negotiates the flexible descriptor
capability with the PF and configures the corresponding offload on each
Rx queue through the 'flex_desc' devarg (for example,
flex_desc='[(1,2-3):tcp]', with illustrative queue IDs, requests TCP
metadata extraction on queues 1 to 3).

Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |   6 +
 drivers/net/iavf/Makefile              |   1 +
 drivers/net/iavf/iavf.h                |  25 +-
 drivers/net/iavf/iavf_ethdev.c         | 398 +++++++++++++++++++++++++
 drivers/net/iavf/iavf_rxtx.c           | 230 +++++++++++++-
 drivers/net/iavf/iavf_rxtx.h           |  17 ++
 drivers/net/iavf/iavf_vchnl.c          |  22 +-
 drivers/net/iavf/meson.build           |   2 +
 drivers/net/iavf/rte_pmd_iavf.h        | 258 ++++++++++++++++
 9 files changed, 937 insertions(+), 22 deletions(-)
 create mode 100644 drivers/net/iavf/rte_pmd_iavf.h

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index df227a177..3f27bf6fb 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,12 @@ New Features
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* **Updated Intel iavf driver.**
+
+  Updated iavf PMD with new features and improvements, including:
+
+  * Added support for flexible descriptor metadata extraction.
+
 Removed Items
 -------------

diff --git a/drivers/net/iavf/Makefile b/drivers/net/iavf/Makefile
index 792cbb7f7..05fcbdc47 100644
--- a/drivers/net/iavf/Makefile
+++ b/drivers/net/iavf/Makefile
@@ -26,6 +26,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_generic_flow.c
 SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_fdir.c
 SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_hash.c
+SYMLINK-$(CONFIG_RTE_LIBRTE_IAVF_PMD)-include := rte_pmd_iavf.h
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_rxtx_vec_sse.c
 endif
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 9ad331ee9..3869ded32 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
 	struct virtchnl_vf_resource *vf_res; /* VF resource */
 	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
 	uint64_t supported_rxdid;
-
+	uint8_t *flex_desc; /* flexible descriptor type for all queues */
 	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from PF */
 	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,28 @@ struct iavf_info {
 
 #define IAVF_MAX_PKT_TYPE 1024
 
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_flex_desc_type {
+	IAVF_FLEX_DESC_NONE,
+	IAVF_FLEX_DESC_VLAN,
+	IAVF_FLEX_DESC_IPV4,
+	IAVF_FLEX_DESC_IPV6,
+	IAVF_FLEX_DESC_IPV6_FLOW,
+	IAVF_FLEX_DESC_TCP,
+	IAVF_FLEX_DESC_OVS,
+	IAVF_FLEX_DESC_IP_OFFSET,
+	IAVF_FLEX_DESC_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+	uint8_t flex_desc_dflt;
+	uint8_t flex_desc[IAVF_MAX_QUEUE_NUM];
+};
+
 /* Structure to store private data for each VF instance.
 */
struct iavf_adapter {
 	struct iavf_hw hw;
@@ -166,6 +188,7 @@ struct iavf_adapter {
 	const uint32_t *ptype_tbl;
 	bool stopped;
 	uint16_t fdir_ref_cnt;
+	struct iavf_devargs devargs;
 };
 
 /* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 28ca3fa8f..e722f1f16 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,52 @@
 #include "iavf.h"
 #include "iavf_rxtx.h"
 #include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_FLEX_DESC_ARG "flex_desc"
+
+static const char * const iavf_valid_args[] = {
+	IAVF_FLEX_DESC_ARG,
+	NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_flex_desc_metadata_param = {
+	.name = "iavf_dynfield_flex_desc_metadata",
+	.size = sizeof(uint32_t),
+	.align = __alignof__(uint32_t),
+	.flags = 0,
+};
+
+struct iavf_flex_desc_ol_flag {
+	const struct rte_mbuf_dynflag param;
+	uint64_t *ol_flag;
+	bool required;
+};
+
+static struct iavf_flex_desc_ol_flag iavf_flex_desc_ol_flag_params[] = {
+	[IAVF_FLEX_DESC_VLAN] = {
+		.param = { .name = "iavf_dynflag_flex_desc_vlan" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_vlan_mask },
+	[IAVF_FLEX_DESC_IPV4] = {
+		.param = { .name = "iavf_dynflag_flex_desc_ipv4" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv4_mask },
+	[IAVF_FLEX_DESC_IPV6] = {
+		.param = { .name = "iavf_dynflag_flex_desc_ipv6" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_mask },
+	[IAVF_FLEX_DESC_IPV6_FLOW] = {
+		.param = { .name = "iavf_dynflag_flex_desc_ipv6_flow" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask },
+	[IAVF_FLEX_DESC_TCP] = {
+		.param = { .name = "iavf_dynflag_flex_desc_tcp" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_tcp_mask },
+	[IAVF_FLEX_DESC_OVS] = {
+		.param = { .name = "iavf_dynflag_flex_desc_ovs" },
+		.ol_flag = &rte_net_iavf_dynflag_flex_desc_ovs_mask },
+	[IAVF_FLEX_DESC_IP_OFFSET] = {
+		.param = { .name = "iavf_dynflag_flex_desc_ip_offset" },
+		.ol_flag =
&rte_net_iavf_dynflag_flex_desc_ip_offset_mask }, +}; static int iavf_dev_configure(struct rte_eth_dev *dev); static int iavf_dev_start(struct rte_eth_dev *dev); @@ -1211,6 +1257,350 @@ iavf_check_vf_reset_done(struct iavf_hw *hw) return 0; } +static int +iavf_lookup_flex_desc_type(const char *xtr_name) +{ + static struct { + const char *name; + enum iavf_flex_desc_type type; + } xtr_type_map[] = { + { "vlan", IAVF_FLEX_DESC_VLAN }, + { "ipv4", IAVF_FLEX_DESC_IPV4 }, + { "ipv6", IAVF_FLEX_DESC_IPV6 }, + { "ipv6_flow", IAVF_FLEX_DESC_IPV6_FLOW }, + { "tcp", IAVF_FLEX_DESC_TCP }, + { "ovs", IAVF_FLEX_DESC_OVS }, + { "ip_offset", IAVF_FLEX_DESC_IP_OFFSET }, + }; + uint32_t i; + + for (i = 0; i < RTE_DIM(xtr_type_map); i++) { + if (strcmp(xtr_name, xtr_type_map[i].name) == 0) + return xtr_type_map[i].type; + } + + PMD_DRV_LOG(ERR, "wrong flex_desc type, " + "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ovs|ip_offset"); + + return -1; +} + +/** + * Parse elem, the elem could be single number/range or '(' ')' group + * 1) A single number elem, it's just a simple digit. e.g. 9 + * 2) A single range elem, two digits with a '-' between. e.g. 2-6 + * 3) A group elem, combines multiple 1) or 2) with '( )'. e.g (0,2-4,6) + * Within group elem, '-' used for a range separator; + * ',' used for a single number. 
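+ *
+ * Illustrative 'flex_desc' devarg values built from these elems (the queue
+ * IDs below are examples, not taken from this patch):
+ *   flex_desc=vlan              all queues use the 'vlan' extraction
+ *   flex_desc='[2:ipv4]'        queue 2 uses the 'ipv4' extraction
+ *   flex_desc='[(0,2-4):tcp]'   queues 0 and 2..4 use the 'tcp' extraction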
+ */ +static int +iavf_parse_queue_set(const char *input, int xtr_type, + struct iavf_devargs *devargs) +{ + const char *str = input; + char *end = NULL; + uint32_t min, max; + uint32_t idx; + + while (isblank(*str)) + str++; + + if (!isdigit(*str) && *str != '(') + return -1; + + /* process single number or single range of number */ + if (*str != '(') { + errno = 0; + idx = strtoul(str, &end, 10); + if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM) + return -1; + + while (isblank(*end)) + end++; + + min = idx; + max = idx; + + /* process single - */ + if (*end == '-') { + end++; + while (isblank(*end)) + end++; + if (!isdigit(*end)) + return -1; + + errno = 0; + idx = strtoul(end, &end, 10); + if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM) + return -1; + + max = idx; + while (isblank(*end)) + end++; + } + + if (*end != ':') + return -1; + + for (idx = RTE_MIN(min, max); + idx <= RTE_MAX(min, max); idx++) + devargs->flex_desc[idx] = xtr_type; + + return 0; + } + + /* process set within bracket */ + str++; + while (isblank(*str)) + str++; + if (*str == '\0') + return -1; + + min = IAVF_MAX_QUEUE_NUM; + do { + /* go ahead to the first digit */ + while (isblank(*str)) + str++; + if (!isdigit(*str)) + return -1; + + /* get the digit value */ + errno = 0; + idx = strtoul(str, &end, 10); + if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM) + return -1; + + /* go ahead to separator '-',',' and ')' */ + while (isblank(*end)) + end++; + if (*end == '-') { + if (min == IAVF_MAX_QUEUE_NUM) + min = idx; + else /* avoid continuous '-' */ + return -1; + } else if (*end == ',' || *end == ')') { + max = idx; + if (min == IAVF_MAX_QUEUE_NUM) + min = idx; + + for (idx = RTE_MIN(min, max); + idx <= RTE_MAX(min, max); idx++) + devargs->flex_desc[idx] = xtr_type; + + min = IAVF_MAX_QUEUE_NUM; + } else { + return -1; + } + + str = end + 1; + } while (*end != ')' && *end != '\0'); + + return 0; +} + +static int +iavf_parse_queue_flex_desc(const char *queues, struct iavf_devargs *devargs) +{ 
+ const char *queue_start; + uint32_t idx; + int xtr_type; + char xtr_name[32]; + + while (isblank(*queues)) + queues++; + + if (*queues != '[') { + xtr_type = iavf_lookup_flex_desc_type(queues); + if (xtr_type < 0) + return -1; + + devargs->flex_desc_dflt = xtr_type; + + return 0; + } + + queues++; + do { + while (isblank(*queues)) + queues++; + if (*queues == '\0') + return -1; + + queue_start = queues; + + /* go across a complete bracket */ + if (*queue_start == '(') { + queues += strcspn(queues, ")"); + if (*queues != ')') + return -1; + } + + /* scan the separator ':' */ + queues += strcspn(queues, ":"); + if (*queues++ != ':') + return -1; + while (isblank(*queues)) + queues++; + + for (idx = 0; ; idx++) { + if (isblank(queues[idx]) || + queues[idx] == ',' || + queues[idx] == ']' || + queues[idx] == '\0') + break; + + if (idx > sizeof(xtr_name) - 2) + return -1; + + xtr_name[idx] = queues[idx]; + } + xtr_name[idx] = '\0'; + xtr_type = iavf_lookup_flex_desc_type(xtr_name); + if (xtr_type < 0) + return -1; + + queues += idx; + + while (isblank(*queues) || *queues == ',' || *queues == ']') + queues++; + + if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0) + return -1; + } while (*queues != '\0'); + + return 0; +} + +static int +iavf_handle_flex_desc_arg(__rte_unused const char *key, const char *value, + void *extra_args) +{ + struct iavf_devargs *devargs = extra_args; + + if (!value || !extra_args) + return -EINVAL; + + if (iavf_parse_queue_flex_desc(value, devargs) < 0) { + PMD_DRV_LOG(ERR, "the flex_desc's parameter is wrong : '%s'", + value); + return -1; + } + + return 0; +} + +static int iavf_parse_devargs(struct rte_eth_dev *dev) +{ + struct iavf_adapter *ad = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct rte_devargs *devargs = dev->device->devargs; + struct rte_kvargs *kvlist; + int ret; + + if (!devargs) + return 0; + + kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args); + if (!kvlist) { + PMD_INIT_LOG(ERR, "invalid 
kvargs key\n"); + return -EINVAL; + } + + ad->devargs.flex_desc_dflt = IAVF_FLEX_DESC_NONE; + memset(ad->devargs.flex_desc, IAVF_FLEX_DESC_NONE, + sizeof(ad->devargs.flex_desc)); + + ret = rte_kvargs_process(kvlist, IAVF_FLEX_DESC_ARG, + &iavf_handle_flex_desc_arg, &ad->devargs); + if (ret) + goto bail; + +bail: + rte_kvargs_free(kvlist); + return ret; +} + +static void +iavf_init_flex_desc(struct rte_eth_dev *dev) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_adapter *ad = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + const struct iavf_flex_desc_ol_flag *ol_flag; + bool flex_desc_enable = false; + int offset; + uint16_t i; + + vf->flex_desc = rte_zmalloc("vf flex desc", + vf->vsi_res->num_queue_pairs, 0); + if (unlikely(!(vf->flex_desc))) { + PMD_DRV_LOG(ERR, "no memory for setting up flex_desc's table"); + return; + } + + for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) { + vf->flex_desc[i] = ad->devargs.flex_desc[i] != + IAVF_FLEX_DESC_NONE ? 
+ ad->devargs.flex_desc[i] : + ad->devargs.flex_desc_dflt; + + if (vf->flex_desc[i] != IAVF_FLEX_DESC_NONE) { + uint8_t type = vf->flex_desc[i]; + + iavf_flex_desc_ol_flag_params[type].required = true; + flex_desc_enable = true; + } + } + + if (likely(!flex_desc_enable)) + return; + + offset = rte_mbuf_dynfield_register(&iavf_flex_desc_metadata_param); + if (unlikely(offset == -1)) { + PMD_DRV_LOG(ERR, + "failed to extract flex_desc metadata, error %d", + -rte_errno); + return; + } + + PMD_DRV_LOG(DEBUG, + "flex_desc extraction metadata offset in mbuf is : %d", + offset); + rte_net_iavf_dynfield_flex_desc_metadata_offs = offset; + + for (i = 0; i < RTE_DIM(iavf_flex_desc_ol_flag_params); i++) { + ol_flag = &iavf_flex_desc_ol_flag_params[i]; + + uint8_t rxdid = iavf_flex_desc_type_to_rxdid((uint8_t)i); + + if (!ol_flag->required) + continue; + + if (!(vf->supported_rxdid & BIT(rxdid))) { + PMD_DRV_LOG(ERR, + "rxdid[%u] is not supported in hardware", + rxdid); + rte_net_iavf_dynfield_flex_desc_metadata_offs = -1; + break; + } + + offset = rte_mbuf_dynflag_register(&ol_flag->param); + if (unlikely(offset == -1)) { + PMD_DRV_LOG(ERR, + "failed to register offload '%s', error %d", + ol_flag->param.name, -rte_errno); + + rte_net_iavf_dynfield_flex_desc_metadata_offs = -1; + break; + } + + PMD_DRV_LOG(DEBUG, + "flex_desc extraction offload '%s' offset in mbuf is : %d", + ol_flag->param.name, offset); + *ol_flag->ol_flag = 1ULL << offset; + } +} + static int iavf_init_vf(struct rte_eth_dev *dev) { @@ -1220,6 +1610,12 @@ iavf_init_vf(struct rte_eth_dev *dev) struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + err = iavf_parse_devargs(dev); + if (err) { + PMD_INIT_LOG(ERR, "Failed to parse devargs"); + goto err; + } + err = iavf_set_mac_type(hw); if (err) { PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err); @@ -1283,6 +1679,8 @@ iavf_init_vf(struct rte_eth_dev *dev) } } + 
iavf_init_flex_desc(dev); + return 0; err_rss: rte_free(vf->rss_key); diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 05a7dd898..fa71b4a80 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -26,6 +26,74 @@ #include "iavf.h" #include "iavf_rxtx.h" +#include "rte_pmd_iavf.h" + +/* Offset of mbuf dynamic field for flexible descriptor's extraction data */ +int rte_net_iavf_dynfield_flex_desc_metadata_offs = -1; + +/* Mask of mbuf dynamic flags for flexible descriptor's type */ +uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_ovs_mask; +uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask; + +static inline uint64_t +iavf_rxdid_to_flex_desc_ol_flag(uint8_t rxdid, bool *chk_valid) +{ + static struct { + uint64_t *ol_flag; + bool chk_valid; + } ol_flag_map[] = { + [IAVF_RXDID_COMMS_AUX_VLAN] = { + &rte_net_iavf_dynflag_flex_desc_vlan_mask, true }, + [IAVF_RXDID_COMMS_AUX_IPV4] = { + &rte_net_iavf_dynflag_flex_desc_ipv4_mask, true }, + [IAVF_RXDID_COMMS_AUX_IPV6] = { + &rte_net_iavf_dynflag_flex_desc_ipv6_mask, true }, + [IAVF_RXDID_COMMS_AUX_IPV6_FLOW] = { + &rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask, true }, + [IAVF_RXDID_COMMS_AUX_TCP] = { + &rte_net_iavf_dynflag_flex_desc_tcp_mask, true }, + [IAVF_RXDID_COMMS_OVS_1] = { + &rte_net_iavf_dynflag_flex_desc_ovs_mask, true }, + [IAVF_RXDID_COMMS_AUX_IP_OFFSET] = { + &rte_net_iavf_dynflag_flex_desc_ip_offset_mask, false }, + }; + uint64_t *ol_flag; + + if (rxdid < RTE_DIM(ol_flag_map)) { + ol_flag = ol_flag_map[rxdid].ol_flag; + if (!ol_flag) + return 0ULL; + + *chk_valid = ol_flag_map[rxdid].chk_valid; + return *ol_flag; + } + + return 0ULL; +} + + +uint8_t +iavf_flex_desc_type_to_rxdid(uint8_t xtr_type) +{ + 
static uint8_t rxdid_map[] = { + [IAVF_FLEX_DESC_NONE] = IAVF_RXDID_COMMS_GENERIC, + [IAVF_FLEX_DESC_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN, + [IAVF_FLEX_DESC_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4, + [IAVF_FLEX_DESC_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6, + [IAVF_FLEX_DESC_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW, + [IAVF_FLEX_DESC_TCP] = IAVF_RXDID_COMMS_AUX_TCP, + [IAVF_FLEX_DESC_OVS] = IAVF_RXDID_COMMS_OVS_1, + [IAVF_FLEX_DESC_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET, + }; + + return xtr_type < RTE_DIM(rxdid_map) ? + rxdid_map[xtr_type] : IAVF_RXDID_COMMS_GENERIC; +} static inline int check_rx_thresh(uint16_t nb_desc, uint16_t thresh) @@ -309,6 +377,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, struct iavf_rx_queue *rxq; const struct rte_memzone *mz; uint32_t ring_size; + uint8_t flex_desc; uint16_t len; uint16_t rx_free_thresh; @@ -346,10 +415,10 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return -ENOMEM; } - if (vf->vf_res->vf_cap_flags & - VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC && - vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) { - rxq->rxdid = IAVF_RXDID_COMMS_OVS_1; + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) { + flex_desc = vf->flex_desc ? 
vf->flex_desc[queue_idx] : + IAVF_FLEX_DESC_NONE; + rxq->rxdid = iavf_flex_desc_type_to_rxdid(flex_desc); } else { rxq->rxdid = IAVF_RXDID_LEGACY_1; } @@ -715,6 +784,45 @@ iavf_stop_queues(struct rte_eth_dev *dev) } } +#define IAVF_RX_FLEX_ERR0_BITS \ + ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S)) + +/* Rx L3/L4 checksum */ +static inline uint64_t +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0) +{ + uint64_t flags = 0; + + /* check if HW has decoded the packet and checksum */ + if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S)))) + return 0; + + if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) { + flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD); + return flags; + } + + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S))) + flags |= PKT_RX_IP_CKSUM_BAD; + else + flags |= PKT_RX_IP_CKSUM_GOOD; + + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S))) + flags |= PKT_RX_L4_CKSUM_BAD; + else + flags |= PKT_RX_L4_CKSUM_GOOD; + + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S))) + flags |= PKT_RX_EIP_CKSUM_BAD; + + return flags; +} + static inline void iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp) { @@ -740,6 +848,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb, } else { mb->vlan_tci = 0; } + +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC + if (rte_le_to_cpu_16(rxdp->wb.status_error1) & + (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) { + mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ | + PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN; + mb->vlan_tci_outer = mb->vlan_tci; + mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd); + PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u", + rte_le_to_cpu_16(rxdp->wb.l2tag2_1st), + 
rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd)); + } else { + mb->vlan_tci_outer = 0; + } +#endif } /* Translate the rx descriptor status and error fields to pkt flags */ @@ -804,14 +927,54 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb) return flags; } +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC +#define IAVF_RX_FLEX_DESC_VALID \ + ((1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S) | \ + (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S)) + +static void +iavf_rxd_to_flex_desc(struct rte_mbuf *mb, + volatile struct iavf_32b_rx_flex_desc_comms *desc) +{ + uint16_t stat_err = rte_le_to_cpu_16(desc->status_error1); + uint32_t metadata = 0; + uint64_t ol_flag; + bool chk_valid; + + ol_flag = iavf_rxdid_to_flex_desc_ol_flag(desc->rxdid, &chk_valid); + if (unlikely(!ol_flag)) + return; + + if (chk_valid) { + if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S)) + metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0); + + if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S)) + metadata |= + rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16; + } else { + if (rte_le_to_cpu_16(desc->flex_ts.flex.aux0) != 0xFFFF) + metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0); + else if (rte_le_to_cpu_16(desc->flex_ts.flex.aux1) != 0xFFFF) + metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1); + } + + if (!metadata) + return; + + mb->ol_flags |= ol_flag; + + *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata; +} +#endif /* Translate the rx flex descriptor status to pkt flags */ static inline void -iavf_rxd_to_pkt_fields(struct rte_mbuf *mb, - volatile union iavf_rx_flex_desc *rxdp) +iavf_rxd_to_pkt_fields_ovs(struct rte_mbuf *mb, + volatile union iavf_rx_flex_desc *rxdp) { volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc = - (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp; + (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp; #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC uint16_t stat_err; @@ -828,6 +991,50 @@ iavf_rxd_to_pkt_fields(struct 
rte_mbuf *mb, } } +/* Translate the rx flex descriptor status to pkt flags */ +static inline void +iavf_rxd_to_pkt_fields_aux(struct rte_mbuf *mb, + volatile union iavf_rx_flex_desc *rxdp) +{ + volatile struct iavf_32b_rx_flex_desc_comms *desc = + (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp; + + uint16_t stat_err; + + stat_err = rte_le_to_cpu_16(desc->status_error0); + if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) { + mb->ol_flags |= PKT_RX_RSS_HASH; + mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash); + } + +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC + if (desc->flow_id != 0xFFFFFFFF) { + mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID; + mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id); + } + + if (unlikely(rte_net_iavf_dynf_flex_desc_metadata_avail())) + iavf_rxd_to_flex_desc(mb, desc); +#endif +} + +/* Translate the rx flex descriptor status to pkt flags */ +static inline void +iavf_rxd_to_pkt_fields(struct rte_mbuf *mb, + volatile union iavf_rx_flex_desc *rxdp, uint8_t rxdid) +{ + if (rxdid == IAVF_RXDID_COMMS_GENERIC || + rxdid == IAVF_RXDID_COMMS_AUX_VLAN || + rxdid == IAVF_RXDID_COMMS_AUX_IPV4 || + rxdid == IAVF_RXDID_COMMS_AUX_IPV6 || + rxdid == IAVF_RXDID_COMMS_AUX_IPV6_FLOW || + rxdid == IAVF_RXDID_COMMS_AUX_TCP || + rxdid == IAVF_RXDID_COMMS_AUX_IP_OFFSET) + iavf_rxd_to_pkt_fields_aux(mb, rxdp); + else if (rxdid == IAVF_RXDID_COMMS_OVS_1) + iavf_rxd_to_pkt_fields_ovs(mb, rxdp); +} + #define IAVF_RX_FLEX_ERR0_BITS \ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \ @@ -1082,7 +1289,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; iavf_flex_rxd_to_vlan_tci(rxm, &rxd); - iavf_rxd_to_pkt_fields(rxm, &rxd); + iavf_rxd_to_pkt_fields(rxm, &rxd, rxq->rxdid); pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0); rxm->ol_flags |= pkt_flags; @@ -1223,7 +1430,7 @@ 
iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; iavf_flex_rxd_to_vlan_tci(first_seg, &rxd); - iavf_rxd_to_pkt_fields(first_seg, &rxd); + iavf_rxd_to_pkt_fields(first_seg, &rxd, rxq->rxdid); pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0); first_seg->ol_flags |= pkt_flags; @@ -1460,7 +1667,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq) mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)]; iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]); - iavf_rxd_to_pkt_fields(mb, &rxdp[j]); + iavf_rxd_to_pkt_fields(mb, &rxdp[j], rxq->rxdid); stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0); @@ -1652,7 +1859,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (rxq->rx_nb_avail) return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); - if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1) + if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST) nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq); else nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq); @@ -2100,6 +2307,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev) struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + #ifdef RTE_ARCH_X86 struct iavf_rx_queue *rxq; int i; diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 59625a979..55eb7f0f2 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -331,6 +331,7 @@ enum iavf_rxdid { IAVF_RXDID_COMMS_AUX_TCP = 21, IAVF_RXDID_COMMS_OVS_1 = 22, IAVF_RXDID_COMMS_OVS_2 = 23, + IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25, IAVF_RXDID_LAST = 63, }; @@ -355,6 +356,20 @@ enum iavf_rx_flex_desc_status_error_0_bits { IAVF_RX_FLEX_DESC_STATUS0_LAST /* this 
entry must be last!!! */ }; +enum iavf_rx_flex_desc_status_error_1_bits { + /* Note: These are predefined bit offsets */ + IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */ + IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4, + IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5, + /* [10:6] reserved */ + IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11, + IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12, + IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13, + IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14, + IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15, + IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */ +}; + /* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */ #define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */ @@ -438,6 +453,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq); int iavf_txq_vec_setup(struct iavf_tx_queue *txq); +uint8_t iavf_flex_desc_type_to_rxdid(uint8_t xtr_type); + const uint32_t *iavf_get_default_ptype_table(void); static inline diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index 6b57ecbba..1ec4e0fef 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -642,25 +642,27 @@ iavf_configure_queues(struct iavf_adapter *adapter) #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC if (vf->vf_res->vf_cap_flags & - VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC && - vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) { - vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1; - PMD_DRV_LOG(NOTICE, "request RXDID == %d in " - "Queue[%d]", vc_qp->rxq.rxdid, i); + VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC && + vf->supported_rxdid & BIT(rxq[i]->rxdid)) { + vc_qp->rxq.rxdid = rxq[i]->rxdid; + PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]", + vc_qp->rxq.rxdid, i); } else { + PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, " + "request default RXDID[%d] in Queue[%d]", + rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i); vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1; - PMD_DRV_LOG(NOTICE, "request RXDID == %d in " - 
"Queue[%d]", vc_qp->rxq.rxdid, i);
 		}
 #else
 		if (vf->vf_res->vf_cap_flags &
 		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
 		    vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
 			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
-			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
-				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+			PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+				    vc_qp->rxq.rxdid, i);
 		} else {
-			PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+			PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+				    IAVF_RXDID_LEGACY_0);
 			return -1;
 		}
 #endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index a3fad363d..cd5159332 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
 		objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
 	endif
 endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..858201bd7
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
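+ *
+ * Per the layout below, raw.data0 overlays the low 16 bits and raw.data1
+ * the high 16 bits of the extracted metadata word; e.g. for a 'vlan'
+ * extraction the outer (stag) fields land in data0 and the inner (ctag)
+ * fields in data1 (illustrative reading of the union, not an extra
+ * hardware guarantee).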
+ */ +union rte_net_iavf_flex_desc_metadata { + uint32_t metadata; + + struct { + uint16_t data0; + uint16_t data1; + } raw; + + struct { + uint16_t stag_vid:12, + stag_dei:1, + stag_pcp:3; + uint16_t ctag_vid:12, + ctag_dei:1, + ctag_pcp:3; + } vlan; + + struct { + uint16_t protocol:8, + ttl:8; + uint16_t tos:8, + ihl:4, + version:4; + } ipv4; + + struct { + uint16_t hoplimit:8, + nexthdr:8; + uint16_t flowhi4:4, + tc:8, + version:4; + } ipv6; + + struct { + uint16_t flowlo16; + uint16_t flowhi4:4, + tc:8, + version:4; + } ipv6_flow; + + struct { + uint16_t fin:1, + syn:1, + rst:1, + psh:1, + ack:1, + urg:1, + ece:1, + cwr:1, + res1:4, + doff:4; + uint16_t rsvd; + } tcp; + + uint32_t ip_ofs; +}; + +/* Offset of mbuf dynamic field for flexible descriptor's extraction data */ +extern int rte_net_iavf_dynfield_flex_desc_metadata_offs; + +/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */ +extern uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_ovs_mask; +extern uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask; + +/** + * The mbuf dynamic field pointer for flexible descriptor's extraction metadata. + */ +#define RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m) \ + RTE_MBUF_DYNFIELD((m), \ + rte_net_iavf_dynfield_flex_desc_metadata_offs, \ + uint32_t *) + +/** + * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid + * when dev_args 'flex_desc' has 'vlan' specified. + */ +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN \ + (rte_net_iavf_dynflag_flex_desc_vlan_mask) + +/** + * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid + * when dev_args 'flex_desc' has 'ipv4' specified. 
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4 \
+	(rte_net_iavf_dynflag_flex_desc_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is valid
+ * when dev_args 'flex_desc' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6 \
+	(rte_net_iavf_dynflag_flex_desc_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata; it
+ * is valid when dev_args 'flex_desc' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW \
+	(rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata; it is valid
+ * when dev_args 'flex_desc' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP \
+	(rte_net_iavf_dynflag_flex_desc_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for the extraction metadata of the OVS flexible
+ * descriptor; it is valid when dev_args 'flex_desc' has 'ovs' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_OVS \
+	(rte_net_iavf_dynflag_flex_desc_ovs_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata; it is valid
+ * when dev_args 'flex_desc' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET \
+	(rte_net_iavf_dynflag_flex_desc_ip_offset_mask)
+
+/**
+ * Check if the mbuf dynamic field for flexible descriptor's extraction
+ * metadata is registered.
+ *
+ * @return
+ *   True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_flex_desc_metadata_avail(void)
+{
+	return rte_net_iavf_dynfield_flex_desc_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ *   The pointer to the mbuf.
+ * @return
+ *   The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_flex_desc_metadata_get(struct rte_mbuf *m)
+{
+	return *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ *   The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_flex_desc_metadata(struct rte_mbuf *m)
+{
+	union rte_net_iavf_flex_desc_metadata data;
+
+	if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+		return;
+
+	data.metadata = rte_net_iavf_dynf_flex_desc_metadata_get(m);
+
+	if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN)
+		printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+		       "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+		       data.raw.data0, data.raw.data1,
+		       data.vlan.stag_pcp,
+		       data.vlan.stag_dei,
+		       data.vlan.stag_vid,
+		       data.vlan.ctag_pcp,
+		       data.vlan.ctag_dei,
+		       data.vlan.ctag_vid);
+	else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4)
+		printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+		       "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+		       data.raw.data0, data.raw.data1,
+		       data.ipv4.version,
+		       data.ipv4.ihl,
+		       data.ipv4.tos,
+		       data.ipv4.ttl,
+		       data.ipv4.protocol);
+	else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6)
+		printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+		       "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+		       data.raw.data0, data.raw.data1,
+		       data.ipv6.version,
+		       data.ipv6.tc,
+		       data.ipv6.flowhi4,
+		       data.ipv6.nexthdr,
+		       data.ipv6.hoplimit);
+	else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW)
+		printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+		       "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+		       data.raw.data0, data.raw.data1,
+		       data.ipv6_flow.version,
+		       data.ipv6_flow.tc,
+		       data.ipv6_flow.flowhi4,
+		       data.ipv6_flow.flowlo16);
+	else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP)
+		printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+		       "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+		       data.raw.data0, data.raw.data1,
+		       data.tcp.doff,
+		       data.tcp.cwr ? "C" : "",
+		       data.tcp.ece ? "E" : "",
+		       data.tcp.urg ? "U" : "",
+		       data.tcp.ack ? "A" : "",
+		       data.tcp.psh ? "P" : "",
+		       data.tcp.rst ? "R" : "",
+		       data.tcp.syn ? "S" : "",
+		       data.tcp.fin ? "F" : "");
+	else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET)
+		printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+		       data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
-- 
2.20.1