From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Ferruh Yigit, Adrien Mazarguil
Date: Wed, 22 Jun 2016 11:05:39 +0200
Message-Id: <1466586355-30777-10-git-send-email-nelio.laranjeiro@6wind.com>
In-Reply-To: <1466586355-30777-1-git-send-email-nelio.laranjeiro@6wind.com>
References: <1466493818-1877-1-git-send-email-nelio.laranjeiro@6wind.com>
 <1466586355-30777-1-git-send-email-nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v4 09/25] mlx5: update prerequisites for upcoming enhancements

The latest version of Mellanox OFED exposes hardware definitions necessary
to implement data path operations bypassing Verbs. Update the minimum
version requirement to MLNX_OFED >= 3.3 and clean up compatibility checks
for previous releases.
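The requirement is enforced at build time: scripts/auto-config-h.sh compiles
small probes against the Verbs headers, records the results as HAVE_* macros
in mlx5_autoconf.h, and mlx5.h now stops the build when the definitions that
ship with MLNX_OFED 3.3 are missing. A minimal sketch of such a probe for one
of the new symbols follows, assuming the enum checked by the updated Makefile;
the test file actually generated by the script may differ:

    /* Compiles only when infiniband/verbs_exp.h provides the CQE
     * compression enumerator; on success the build system defines
     * HAVE_VERBS_IBV_EXP_CQ_COMPRESSED_CQE in mlx5_autoconf.h. */
    #include <infiniband/verbs_exp.h>

    int
    main(void)
    {
            return (int)IBV_EXP_CQ_COMPRESSED_CQE;
    }
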
Signed-off-by: Nelio Laranjeiro
Signed-off-by: Adrien Mazarguil
---
 doc/guides/nics/mlx5.rst       | 44 +++---------------------------------------
 drivers/net/mlx5/Makefile      | 39 ++++++++-----------------------------
 drivers/net/mlx5/mlx5.c        | 23 ----------------------
 drivers/net/mlx5/mlx5.h        |  5 +++++
 drivers/net/mlx5/mlx5_defs.h   |  9 ---------
 drivers/net/mlx5/mlx5_fdir.c   | 10 ----------
 drivers/net/mlx5/mlx5_rxmode.c |  8 --------
 drivers/net/mlx5/mlx5_rxq.c    | 30 ----------------------------
 drivers/net/mlx5/mlx5_rxtx.c   |  4 ----
 drivers/net/mlx5/mlx5_rxtx.h   |  8 --------
 drivers/net/mlx5/mlx5_txq.c    |  2 --
 drivers/net/mlx5/mlx5_vlan.c   |  3 ---
 12 files changed, 16 insertions(+), 169 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 77fa957..3a07928 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -125,16 +125,6 @@ These options can be modified in the ``.config`` file.
 Environment variables
 ~~~~~~~~~~~~~~~~~~~~~
 
-- ``MLX5_ENABLE_CQE_COMPRESSION``
-
-  A nonzero value lets ConnectX-4 return smaller completion entries to
-  improve performance when PCI backpressure is detected. It is most useful
-  for scenarios involving heavy traffic on many queues.
-
-  Since the additional software logic necessary to handle this mode can
-  lower performance when there is no backpressure, it is not enabled by
-  default.
-
 - ``MLX5_PMD_ENABLE_PADDING``
 
   Enables HW packet padding in PCI bus transactions.
@@ -211,40 +201,12 @@ DPDK and must be installed separately:
 
 Currently supported by DPDK:
 
-- Mellanox OFED **3.1-1.0.3**, **3.1-1.5.7.1** or **3.2-2.0.0.0** depending
-  on usage.
-
-  The following features are supported with version **3.1-1.5.7.1** and
-  above only:
-
-  - IPv6, UPDv6, TCPv6 RSS.
-  - RX checksum offloads.
-  - IBM POWER8.
-
-  The following features are supported with version **3.2-2.0.0.0** and
-  above only:
-
-  - Flow director.
-  - RX VLAN stripping.
-  - TX VLAN insertion.
-  - RX CRC stripping configuration.
+- Mellanox OFED **3.3-1.0.0.0**.
 
 - Minimum firmware version:
 
-  - With MLNX_OFED **3.1-1.0.3**:
-
-    - ConnectX-4: **12.12.1240**
-    - ConnectX-4 Lx: **14.12.1100**
-
-  - With MLNX_OFED **3.1-1.5.7.1**:
-
-    - ConnectX-4: **12.13.0144**
-    - ConnectX-4 Lx: **14.13.0144**
-
-  - With MLNX_OFED **3.2-2.0.0.0**:
-
-    - ConnectX-4: **12.14.2036**
-    - ConnectX-4 Lx: **14.14.2036**
+  - ConnectX-4: **12.16.1006**
+  - ConnectX-4 Lx: **14.16.1006**
 
 Getting Mellanox OFED
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index 289c85e..dc99797 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -106,42 +106,19 @@ mlx5_autoconf.h.new: FORCE
 mlx5_autoconf.h.new: $(RTE_SDK)/scripts/auto-config-h.sh
 	$Q $(RM) -f -- '$@'
 	$Q sh -- '$<' '$@' \
-		HAVE_EXP_QUERY_DEVICE \
-		infiniband/verbs.h \
-		type 'struct ibv_exp_device_attr' $(AUTOCONF_OUTPUT)
-	$Q sh -- '$<' '$@' \
-		HAVE_FLOW_SPEC_IPV6 \
-		infiniband/verbs.h \
-		type 'struct ibv_exp_flow_spec_ipv6' $(AUTOCONF_OUTPUT)
-	$Q sh -- '$<' '$@' \
-		HAVE_EXP_QP_BURST_CREATE_ENABLE_MULTI_PACKET_SEND_WR \
-		infiniband/verbs.h \
-		enum IBV_EXP_QP_BURST_CREATE_ENABLE_MULTI_PACKET_SEND_WR \
-		$(AUTOCONF_OUTPUT)
-	$Q sh -- '$<' '$@' \
-		HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS \
-		infiniband/verbs.h \
-		enum IBV_EXP_DEVICE_ATTR_VLAN_OFFLOADS \
-		$(AUTOCONF_OUTPUT)
-	$Q sh -- '$<' '$@' \
-		HAVE_EXP_CQ_RX_TCP_PACKET \
+		HAVE_VERBS_VLAN_INSERTION \
 		infiniband/verbs.h \
-		enum IBV_EXP_CQ_RX_TCP_PACKET \
+		enum IBV_EXP_RECEIVE_WQ_CVLAN_INSERTION \
 		$(AUTOCONF_OUTPUT)
 	$Q sh -- '$<' '$@' \
-		HAVE_VERBS_FCS \
-		infiniband/verbs.h \
-		enum IBV_EXP_CREATE_WQ_FLAG_SCATTER_FCS \
+		HAVE_VERBS_IBV_EXP_CQ_COMPRESSED_CQE \
+		infiniband/verbs_exp.h \
+		enum IBV_EXP_CQ_COMPRESSED_CQE \
 		$(AUTOCONF_OUTPUT)
 	$Q sh -- '$<' '$@' \
-		HAVE_VERBS_RX_END_PADDING \
-		infiniband/verbs.h \
-		enum IBV_EXP_CREATE_WQ_FLAG_RX_END_PADDING \
-		$(AUTOCONF_OUTPUT)
-	$Q sh -- '$<' '$@' \
-		HAVE_VERBS_VLAN_INSERTION \
-		infiniband/verbs.h \
-		enum IBV_EXP_RECEIVE_WQ_CVLAN_INSERTION \
+		HAVE_VERBS_MLX5_ETH_VLAN_INLINE_HEADER_SIZE \
+		infiniband/mlx5_hw.h \
+		enum MLX5_ETH_VLAN_INLINE_HEADER_SIZE \
+		$(AUTOCONF_OUTPUT)
 
 # Create mlx5_autoconf.h or update it in case it differs from the new one.
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 27a7a30..3f45d84 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -195,17 +195,13 @@ static const struct eth_dev_ops mlx5_dev_ops = {
 	.mac_addr_add = mlx5_mac_addr_add,
 	.mac_addr_set = mlx5_mac_addr_set,
 	.mtu_set = mlx5_dev_set_mtu,
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 	.vlan_strip_queue_set = mlx5_vlan_strip_queue_set,
 	.vlan_offload_set = mlx5_vlan_offload_set,
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 	.reta_update = mlx5_dev_rss_reta_update,
 	.reta_query = mlx5_dev_rss_reta_query,
 	.rss_hash_update = mlx5_rss_hash_update,
 	.rss_hash_conf_get = mlx5_rss_hash_conf_get,
-#ifdef MLX5_FDIR_SUPPORT
 	.filter_ctrl = mlx5_dev_filter_ctrl,
-#endif /* MLX5_FDIR_SUPPORT */
 };
 
 static struct {
@@ -352,24 +348,16 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		struct ibv_pd *pd = NULL;
 		struct priv *priv = NULL;
 		struct rte_eth_dev *eth_dev;
-#ifdef HAVE_EXP_QUERY_DEVICE
 		struct ibv_exp_device_attr exp_device_attr;
-#endif /* HAVE_EXP_QUERY_DEVICE */
 		struct ether_addr mac;
 		uint16_t num_vfs = 0;
 
-#ifdef HAVE_EXP_QUERY_DEVICE
 		exp_device_attr.comp_mask =
 			IBV_EXP_DEVICE_ATTR_EXP_CAP_FLAGS |
 			IBV_EXP_DEVICE_ATTR_RX_HASH |
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 			IBV_EXP_DEVICE_ATTR_VLAN_OFFLOADS |
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
-#ifdef HAVE_VERBS_RX_END_PADDING
 			IBV_EXP_DEVICE_ATTR_RX_PAD_END_ALIGN |
-#endif /* HAVE_VERBS_RX_END_PADDING */
 			0;
-#endif /* HAVE_EXP_QUERY_DEVICE */
 
 		DEBUG("using port %u (%08" PRIx32 ")", port, test);
 
@@ -420,7 +408,6 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		priv->port = port;
 		priv->pd = pd;
 		priv->mtu = ETHER_MTU;
-#ifdef HAVE_EXP_QUERY_DEVICE
 		if (ibv_exp_query_device(ctx, &exp_device_attr)) {
 			ERROR("ibv_exp_query_device() failed");
 			goto port_error;
@@ -446,30 +433,20 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		priv->ind_table_max_size = RSS_INDIRECTION_TABLE_SIZE;
 		DEBUG("maximum RX indirection table size is %u",
 		      priv->ind_table_max_size);
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 		priv->hw_vlan_strip = !!(exp_device_attr.wq_vlan_offloads_cap &
 					 IBV_EXP_RECEIVE_WQ_CVLAN_STRIP);
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 		DEBUG("VLAN stripping is %ssupported",
 		      (priv->hw_vlan_strip ? "" : "not "));
 
-#ifdef HAVE_VERBS_FCS
 		priv->hw_fcs_strip = !!(exp_device_attr.exp_device_cap_flags &
 					IBV_EXP_DEVICE_SCATTER_FCS);
-#endif /* HAVE_VERBS_FCS */
 		DEBUG("FCS stripping configuration is %ssupported",
 		      (priv->hw_fcs_strip ? "" : "not "));
 
-#ifdef HAVE_VERBS_RX_END_PADDING
 		priv->hw_padding = !!exp_device_attr.rx_pad_end_addr_align;
-#endif /* HAVE_VERBS_RX_END_PADDING */
 		DEBUG("hardware RX end alignment padding is %ssupported",
 		      (priv->hw_padding ? "" : "not "));
 
-#else /* HAVE_EXP_QUERY_DEVICE */
-		priv->ind_table_max_size = RSS_INDIRECTION_TABLE_SIZE;
-#endif /* HAVE_EXP_QUERY_DEVICE */
-
 		priv_get_num_vfs(priv, &num_vfs);
 		priv->sriov = (num_vfs || sriov);
 		priv->mps = mps;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index cbcb8b9..935e1b0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -68,6 +68,11 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 
+#if !defined(HAVE_VERBS_IBV_EXP_CQ_COMPRESSED_CQE) || \
+	!defined(HAVE_VERBS_MLX5_ETH_VLAN_INLINE_HEADER_SIZE)
+#error Mellanox OFED >= 3.3 is required, please refer to the documentation.
+#endif
+
 enum {
 	PCI_VENDOR_ID_MELLANOX = 0x15b3,
 };
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 9a19835..8d2ec7a 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -76,13 +76,4 @@
 /* Alarm timeout. */
 #define MLX5_ALARM_TIMEOUT_US 100000
 
-/*
- * Extended flow priorities necessary to support flow director are available
- * since MLNX_OFED 3.2. Considering this version adds support for VLAN
- * offloads as well, their availability means flow director can be used.
- */
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
-#define MLX5_FDIR_SUPPORT 1
-#endif
-
 #endif /* RTE_PMD_MLX5_DEFS_H_ */
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index e3b97ba..1850218 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -122,7 +122,6 @@ fdir_filter_to_flow_desc(const struct rte_eth_fdir_filter *fdir_filter,
 	case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER:
 		desc->type = HASH_RXQ_IPV4;
 		break;
-#ifdef HAVE_FLOW_SPEC_IPV6
 	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
 		desc->type = HASH_RXQ_UDPV6;
 		break;
@@ -132,7 +131,6 @@ fdir_filter_to_flow_desc(const struct rte_eth_fdir_filter *fdir_filter,
 	case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER:
 		desc->type = HASH_RXQ_IPV6;
 		break;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	default:
 		break;
 	}
@@ -147,7 +145,6 @@ fdir_filter_to_flow_desc(const struct rte_eth_fdir_filter *fdir_filter,
 		desc->src_ip[0] = fdir_filter->input.flow.ip4_flow.src_ip;
 		desc->dst_ip[0] = fdir_filter->input.flow.ip4_flow.dst_ip;
 		break;
-#ifdef HAVE_FLOW_SPEC_IPV6
 	case RTE_ETH_FLOW_NONFRAG_IPV6_UDP:
 	case RTE_ETH_FLOW_NONFRAG_IPV6_TCP:
 		desc->src_port = fdir_filter->input.flow.udp6_flow.src_port;
@@ -161,7 +158,6 @@ fdir_filter_to_flow_desc(const struct rte_eth_fdir_filter *fdir_filter,
 		       fdir_filter->input.flow.ipv6_flow.dst_ip,
 		       sizeof(desc->dst_ip));
 		break;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	default:
 		break;
 	}
@@ -211,7 +207,6 @@ priv_fdir_overlap(const struct priv *priv,
 		     (desc2->dst_ip[0] & mask->ipv4_mask.dst_ip)))
 			return 0;
 		break;
-#ifdef HAVE_FLOW_SPEC_IPV6
 	case HASH_RXQ_IPV6:
 	case HASH_RXQ_UDPV6:
 	case HASH_RXQ_TCPV6:
@@ -222,7 +217,6 @@ priv_fdir_overlap(const struct priv *priv,
 			     (desc2->dst_ip[i] & mask->ipv6_mask.dst_ip[i])))
 				return 0;
 		break;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	default:
 		break;
 	}
@@ -258,9 +252,7 @@ priv_fdir_flow_add(struct priv *priv,
 	uintptr_t spec_offset = (uintptr_t)&data->spec;
 	struct ibv_exp_flow_spec_eth *spec_eth;
 	struct ibv_exp_flow_spec_ipv4 *spec_ipv4;
-#ifdef HAVE_FLOW_SPEC_IPV6
 	struct ibv_exp_flow_spec_ipv6 *spec_ipv6;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	struct ibv_exp_flow_spec_tcp_udp *spec_tcp_udp;
 	struct mlx5_fdir_filter *iter_fdir_filter;
 	unsigned int i;
@@ -334,7 +326,6 @@ priv_fdir_flow_add(struct priv *priv,
 
 		spec_offset += spec_ipv4->size;
 		break;
-#ifdef HAVE_FLOW_SPEC_IPV6
 	case HASH_RXQ_IPV6:
 	case HASH_RXQ_UDPV6:
 	case HASH_RXQ_TCPV6:
@@ -368,7 +359,6 @@ priv_fdir_flow_add(struct priv *priv,
 
 		spec_offset += spec_ipv6->size;
 		break;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	default:
 		ERROR("invalid flow attribute type");
 		return EINVAL;
diff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c
index 3a55f63..51e2aca 100644
--- a/drivers/net/mlx5/mlx5_rxmode.c
+++ b/drivers/net/mlx5/mlx5_rxmode.c
@@ -67,11 +67,9 @@ static const struct special_flow_init special_flow_init[] = {
 			1 << HASH_RXQ_TCPV4 |
 			1 << HASH_RXQ_UDPV4 |
 			1 << HASH_RXQ_IPV4 |
-#ifdef HAVE_FLOW_SPEC_IPV6
 			1 << HASH_RXQ_TCPV6 |
 			1 << HASH_RXQ_UDPV6 |
 			1 << HASH_RXQ_IPV6 |
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 			1 << HASH_RXQ_ETH |
 			0,
 		.per_vlan = 0,
@@ -82,10 +80,8 @@ static const struct special_flow_init special_flow_init[] = {
 		.hash_types =
 			1 << HASH_RXQ_UDPV4 |
 			1 << HASH_RXQ_IPV4 |
-#ifdef HAVE_FLOW_SPEC_IPV6
 			1 << HASH_RXQ_UDPV6 |
 			1 << HASH_RXQ_IPV6 |
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 			1 << HASH_RXQ_ETH |
 			0,
 		.per_vlan = 0,
@@ -96,15 +92,12 @@ static const struct special_flow_init special_flow_init[] = {
 		.hash_types =
 			1 << HASH_RXQ_UDPV4 |
 			1 << HASH_RXQ_IPV4 |
-#ifdef HAVE_FLOW_SPEC_IPV6
 			1 << HASH_RXQ_UDPV6 |
 			1 << HASH_RXQ_IPV6 |
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 			1 << HASH_RXQ_ETH |
 			0,
 		.per_vlan = 1,
 	},
-#ifdef HAVE_FLOW_SPEC_IPV6
 	[HASH_RXQ_FLOW_TYPE_IPV6MULTI] = {
 		.dst_mac_val = "\x33\x33\x00\x00\x00\x00",
 		.dst_mac_mask = "\xff\xff\x00\x00\x00\x00",
@@ -115,7 +108,6 @@ static const struct special_flow_init special_flow_init[] = {
 			0,
 		.per_vlan = 1,
 	},
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 };
 
 /**
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8d32e74..7db4ce7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -105,7 +105,6 @@ const struct hash_rxq_init hash_rxq_init[] = {
 		},
 		.underlayer = &hash_rxq_init[HASH_RXQ_ETH],
 	},
-#ifdef HAVE_FLOW_SPEC_IPV6
 	[HASH_RXQ_TCPV6] = {
 		.hash_fields = (IBV_EXP_RX_HASH_SRC_IPV6 |
 				IBV_EXP_RX_HASH_DST_IPV6 |
@@ -144,7 +143,6 @@ const struct hash_rxq_init hash_rxq_init[] = {
 		},
 		.underlayer = &hash_rxq_init[HASH_RXQ_ETH],
 	},
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	[HASH_RXQ_ETH] = {
 		.hash_fields = 0,
 		.dpdk_rss_hf = 0,
@@ -168,17 +166,11 @@ static const struct ind_table_init ind_table_init[] = {
 			1 << HASH_RXQ_TCPV4 |
 			1 << HASH_RXQ_UDPV4 |
 			1 << HASH_RXQ_IPV4 |
-#ifdef HAVE_FLOW_SPEC_IPV6
 			1 << HASH_RXQ_TCPV6 |
 			1 << HASH_RXQ_UDPV6 |
 			1 << HASH_RXQ_IPV6 |
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 			0,
-#ifdef HAVE_FLOW_SPEC_IPV6
 		.hash_types_n = 6,
-#else /* HAVE_FLOW_SPEC_IPV6 */
-		.hash_types_n = 3,
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	},
 	{
 		.max_size = 1,
@@ -243,12 +235,8 @@ priv_flow_attr(struct priv *priv, struct ibv_exp_flow_attr *flow_attr,
 	init = &hash_rxq_init[type];
 	*flow_attr = (struct ibv_exp_flow_attr){
 		.type = IBV_EXP_FLOW_ATTR_NORMAL,
-#ifdef MLX5_FDIR_SUPPORT
 		/* Priorities < 3 are reserved for flow director. */
 		.priority = init->flow_priority + 3,
-#else /* MLX5_FDIR_SUPPORT */
-		.priority = init->flow_priority,
-#endif /* MLX5_FDIR_SUPPORT */
 		.num_of_specs = 0,
 		.port = priv->port,
 		.flags = 0,
@@ -589,9 +577,7 @@ priv_allow_flow_type(struct priv *priv, enum hash_rxq_flow_type type)
 	case HASH_RXQ_FLOW_TYPE_ALLMULTI:
 		return !!priv->allmulti_req;
 	case HASH_RXQ_FLOW_TYPE_BROADCAST:
-#ifdef HAVE_FLOW_SPEC_IPV6
 	case HASH_RXQ_FLOW_TYPE_IPV6MULTI:
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 		/* If allmulti is enabled, broadcast and ipv6multi
 		 * are unnecessary. */
 		return !priv->allmulti_req;
@@ -1038,19 +1024,13 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl, uint16_t desc,
 		.cq = tmpl.rxq.cq,
 		.comp_mask =
 			IBV_EXP_CREATE_WQ_RES_DOMAIN |
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 			IBV_EXP_CREATE_WQ_VLAN_OFFLOADS |
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 			0,
 		.res_domain = tmpl.rd,
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 		.vlan_offloads = (tmpl.rxq.vlan_strip ?
 				  IBV_EXP_RECEIVE_WQ_CVLAN_STRIP :
 				  0),
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 	};
-
-#ifdef HAVE_VERBS_FCS
 	/* By default, FCS (CRC) is stripped by hardware. */
 	if (dev->data->dev_conf.rxmode.hw_strip_crc) {
 		tmpl.rxq.crc_present = 0;
@@ -1071,9 +1051,6 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl, uint16_t desc,
 	      (void *)dev,
 	      tmpl.rxq.crc_present ?
 	      "disabled" : "enabled",
 	      tmpl.rxq.crc_present << 2);
-#endif /* HAVE_VERBS_FCS */
-
-#ifdef HAVE_VERBS_RX_END_PADDING
 	if (!mlx5_getenv_int("MLX5_PMD_ENABLE_PADDING"))
 		; /* Nothing else to do. */
 	else if (priv->hw_padding) {
@@ -1086,7 +1063,6 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl, uint16_t desc,
 		      " supported, make sure MLNX_OFED and firmware are"
 		      " up to date",
 		      (void *)dev);
-#endif /* HAVE_VERBS_RX_END_PADDING */
 
 	tmpl.rxq.wq = ibv_exp_create_wq(priv->ctx, &attr.wq);
 	if (tmpl.rxq.wq == NULL) {
@@ -1106,9 +1082,7 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl, uint16_t desc,
 	DEBUG("%p: RTE port ID: %u", (void *)rxq_ctrl, tmpl.rxq.port_id);
 	attr.params = (struct ibv_exp_query_intf_params){
 		.intf_scope = IBV_EXP_INTF_GLOBAL,
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 		.intf_version = 1,
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 		.intf = IBV_EXP_INTF_CQ,
 		.obj = tmpl.rxq.cq,
 	};
@@ -1164,11 +1138,7 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl, uint16_t desc,
 	DEBUG("%p: rxq updated with %p", (void *)rxq_ctrl, (void *)&tmpl);
 	assert(ret == 0);
 	/* Assign function in queue. */
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 	rxq_ctrl->rxq.poll = rxq_ctrl->if_cq->poll_length_flags_cvlan;
-#else /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
-	rxq_ctrl->rxq.poll = rxq_ctrl->if_cq->poll_length_flags;
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 	rxq_ctrl->rxq.recv = rxq_ctrl->if_wq->recv_burst;
 	return 0;
 error:
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index f0b42e9..6a0d707 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -452,11 +452,9 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
 		TRANSPOSE(~flags,
 			  IBV_EXP_CQ_RX_IP_CSUM_OK,
 			  PKT_RX_IP_CKSUM_BAD);
-#ifdef HAVE_EXP_CQ_RX_TCP_PACKET
 	/* Set L4 checksum flag only for TCP/UDP packets. */
 	if (flags &
 	    (IBV_EXP_CQ_RX_TCP_PACKET | IBV_EXP_CQ_RX_UDP_PACKET))
-#endif /* HAVE_EXP_CQ_RX_TCP_PACKET */
 		ol_flags |=
 			TRANSPOSE(~flags,
 				  IBV_EXP_CQ_RX_TCP_UDP_CSUM_OK,
@@ -589,12 +587,10 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (rxq->csum | rxq->csum_l2tun | rxq->vlan_strip) {
 			seg->packet_type = rxq_cq_to_pkt_type(flags);
 			seg->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 			if (flags & IBV_EXP_CQ_RX_CVLAN_STRIPPED_V1) {
 				seg->ol_flags |= PKT_RX_VLAN_PKT;
 				seg->vlan_tci = vlan_tci;
 			}
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 		}
 		/* Return packet. */
 		*(pkts++) = seg;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 2c5e447..570345b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -120,11 +120,7 @@ struct rxq_ctrl {
 	struct fdir_queue fdir_queue; /* Flow director queue. */
 	struct ibv_mr *mr; /* Memory Region (for mp). */
 	struct ibv_exp_wq_family *if_wq; /* WQ burst interface. */
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 	struct ibv_exp_cq_family_v1 *if_cq; /* CQ interface. */
-#else /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
-	struct ibv_exp_cq_family *if_cq; /* CQ interface. */
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	struct rxq rxq; /* Data path structure. */
 };
 
@@ -134,11 +130,9 @@ enum hash_rxq_type {
 	HASH_RXQ_TCPV4,
 	HASH_RXQ_UDPV4,
 	HASH_RXQ_IPV4,
-#ifdef HAVE_FLOW_SPEC_IPV6
 	HASH_RXQ_TCPV6,
 	HASH_RXQ_UDPV6,
 	HASH_RXQ_IPV6,
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 	HASH_RXQ_ETH,
 };
 
@@ -169,9 +163,7 @@ struct hash_rxq_init {
 		} hdr;
 		struct ibv_exp_flow_spec_tcp_udp tcp_udp;
 		struct ibv_exp_flow_spec_ipv4 ipv4;
-#ifdef HAVE_FLOW_SPEC_IPV6
 		struct ibv_exp_flow_spec_ipv6 ipv6;
-#endif /* HAVE_FLOW_SPEC_IPV6 */
 		struct ibv_exp_flow_spec_eth eth;
 	} flow_spec; /* Flow specification template. */
 	const struct hash_rxq_init *underlayer; /* Pointer to underlayer. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4683775..9f3a33b 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -375,13 +375,11 @@ txq_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl, uint16_t desc,
 #ifdef HAVE_VERBS_VLAN_INSERTION
 		.intf_version = 1,
 #endif
-#ifdef HAVE_EXP_QP_BURST_CREATE_ENABLE_MULTI_PACKET_SEND_WR
 		/* Enable multi-packet send if supported. */
 		.family_flags =
 			(priv->mps ?
 			 IBV_EXP_QP_BURST_CREATE_ENABLE_MULTI_PACKET_SEND_WR :
 			 0),
-#endif
 	};
 	tmpl.if_qp = ibv_exp_query_intf(priv->ctx, &attr.params, &status);
 	if (tmpl.if_qp == NULL) {
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index ff40538..3b9b771 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -144,7 +144,6 @@ static void
 priv_vlan_strip_queue_set(struct priv *priv, uint16_t idx, int on)
 {
 	struct rxq *rxq = (*priv->rxqs)[idx];
-#ifdef HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS
 	struct ibv_exp_wq_attr mod;
 	uint16_t vlan_offloads =
 		(on ? IBV_EXP_RECEIVE_WQ_CVLAN_STRIP : 0) |
@@ -165,8 +164,6 @@ priv_vlan_strip_queue_set(struct priv *priv, uint16_t idx, int on)
 		return;
 	}
-#endif /* HAVE_EXP_DEVICE_ATTR_VLAN_OFFLOADS */
-
 	/* Update related bits in RX queue. */
 	rxq->vlan_strip = !!on;
 }
-- 
2.1.4