DPDK patches and discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: yskoh@mellanox.com, nelio.laranjeiro@6wind.com,
	adrien.mazarguil@6wind.com
Cc: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2] net/mlx5: add Bluefield device ID
Date: Tue, 15 May 2018 09:12:50 +0300	[thread overview]
Message-ID: <20180515061250.372-1-shahafs@mellanox.com> (raw)
In-Reply-To: <20180225072049.85144-1-shahafs@mellanox.com>

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---

On v2:
 - Updated mlx5 docs to include the Bluefield product.
 - PCI_DEVICE_ID_MELLANOX_BLUEFIELD -> PCI_DEVICE_ID_MELLANOX_CONNECTX5BF

---
 config/common_base       |  3 ++-
 doc/guides/nics/mlx5.rst | 58 ++++++++++++++++++++++++++----------------------
 drivers/net/mlx5/mlx5.c  |  4 ++++
 drivers/net/mlx5/mlx5.h  |  1 +
 4 files changed, 38 insertions(+), 28 deletions(-)

diff --git a/config/common_base b/config/common_base
index c4dba709d1..6b0d1cbbb7 100644
--- a/config/common_base
+++ b/config/common_base
@@ -295,7 +295,8 @@ CONFIG_RTE_LIBRTE_MLX4_DEBUG=n
 CONFIG_RTE_LIBRTE_MLX4_DLOPEN_DEPS=n
 
 #
-# Compile burst-oriented Mellanox ConnectX-4 & ConnectX-5 (MLX5) PMD
+# Compile burst-oriented Mellanox ConnectX-4, ConnectX-5 & Bluefield
+# (MLX5) PMD
 #
 CONFIG_RTE_LIBRTE_MLX5_PMD=n
 CONFIG_RTE_LIBRTE_MLX5_DEBUG=n
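
For context: the PMD stays disabled by default
(``CONFIG_RTE_LIBRTE_MLX5_PMD=n``), so a build that should drive Bluefield
devices still has to enable it explicitly. A minimal sketch with the legacy
make-based build system (the target name below is one common choice, not
something this patch mandates, and rdma-core or Mellanox OFED must already
be installed):

  sed -i 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' \
      config/common_base
  make config T=x86_64-native-linuxapp-gcc
  make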
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a7d5c90bcf..f4a127b8fd 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -6,9 +6,9 @@ MLX5 poll mode driver
 =====================
 
 The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
-for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** and **Mellanox
-ConnectX-5** families of 10/25/40/50/100 Gb/s adapters as well as their
-virtual functions (VF) in SR-IOV context.
+for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx**, **Mellanox
+ConnectX-5** and **Mellanox Bluefield** families of 10/25/40/50/100 Gb/s
+adapters as well as their virtual functions (VF) in SR-IOV context.
 
 Information and documentation about these adapters can be found on the
 `Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
@@ -237,8 +237,8 @@ Run-time configuration
 
   Supported on:
 
-  - x86_64 with ConnectX-4, ConnectX-4 LX and ConnectX-5.
-  - POWER8 and ARMv8 with ConnectX-4 LX and ConnectX-5.
+  - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5 and Bluefield.
+  - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5 and Bluefield.
 
 - ``mprq_en`` parameter [int]
 
@@ -304,34 +304,35 @@ Run-time configuration
 
   This option should be used in combination with ``txq_inline`` above.
 
-  On ConnectX-4, ConnectX-4 LX and ConnectX-5 without Enhanced MPW:
+  On ConnectX-4, ConnectX-4 LX, ConnectX-5 and Bluefield without
+  Enhanced MPW:
 
         - Disabled by default.
        - In case ``txq_inline`` is set, the recommendation is 4.
 
-  On ConnectX-5 with Enhanced MPW:
+  On ConnectX-5 and Bluefield with Enhanced MPW:
 
         - Set to 8 by default.
 
 - ``txq_mpw_en`` parameter [int]
 
   A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
-  enhanced multi-packet send (Enhanced MPS) for ConnectX-5. MPS allows the
-  TX burst function to pack up multiple packets in a single descriptor
-  session in order to save PCI bandwidth and improve performance at the
-  cost of a slightly higher CPU usage. When ``txq_inline`` is set along
-  with ``txq_mpw_en``, TX burst function tries to copy entire packet data
-  on to TX descriptor instead of including pointer of packet only if there
-  is enough room remained in the descriptor. ``txq_inline`` sets
-  per-descriptor space for either pointers or inlined packets. In addition,
-  Enhanced MPS supports hybrid mode - mixing inlined packets and pointers
-  in the same descriptor.
+  enhanced multi-packet send (Enhanced MPS) for ConnectX-5 and Bluefield.
+  MPS allows the TX burst function to pack multiple packets into a single
+  descriptor session in order to save PCI bandwidth and improve performance
+  at the cost of slightly higher CPU usage. When ``txq_inline`` is set
+  along with ``txq_mpw_en``, the TX burst function tries to copy the
+  entire packet data into the TX descriptor instead of including only a
+  pointer to the packet, provided there is enough room remaining in the
+  descriptor. ``txq_inline`` sets per-descriptor space for either pointers
+  or inlined packets. In addition, Enhanced MPS supports hybrid mode -
+  mixing inlined packets and pointers in the same descriptor.
 
   This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
   DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
  When those offloads are requested, the MPS send function will not be used.
 
-  It is currently only supported on the ConnectX-4 Lx and ConnectX-5
+  It is currently only supported on the ConnectX-4 Lx, ConnectX-5 and Bluefield
   families of adapters. Enabled by default.
 
 - ``txq_mpw_hdr_dseg_en`` parameter [int]
@@ -352,14 +353,14 @@ Run-time configuration
 
 - ``tx_vec_en`` parameter [int]
 
-  A nonzero value enables Tx vector on ConnectX-5 only NIC if the number of
+  A nonzero value enables Tx vector on ConnectX-5 and Bluefield NICs if the number of
  global Tx queues on the port is less than MLX5_VPMD_MIN_TXQS.
 
   This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
   DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
  When those offloads are requested, the MPS send function will not be used.
 
-  Enabled by default on ConnectX-5.
+  Enabled by default on ConnectX-5 and Bluefield.
 
 - ``rx_vec_en`` parameter [int]
 
@@ -422,8 +423,9 @@ DPDK and must be installed separately:
 
 - **libmlx5**
 
-  Low-level user space driver library for Mellanox ConnectX-4/ConnectX-5
-  devices, it is automatically loaded by libibverbs.
+  Low-level user space driver library for Mellanox
+  ConnectX-4/ConnectX-5/Bluefield devices; it is automatically loaded
+  by libibverbs.
 
   This library basically implements send/receive calls to the hardware
   queues.
@@ -437,15 +439,16 @@ DPDK and must be installed separately:
   Unlike most other PMDs, these modules must remain loaded and bound to
   their devices:
 
-  - mlx5_core: hardware driver managing Mellanox ConnectX-4/ConnectX-5
-    devices and related Ethernet kernel network devices.
+  - mlx5_core: hardware driver managing Mellanox
+    ConnectX-4/ConnectX-5/Bluefield devices and related Ethernet kernel
+    network devices.
  - mlx5_ib: InfiniBand device driver.
   - ib_uverbs: user space driver for Verbs (entry point for libibverbs).
 
 - **Firmware update**
 
-  Mellanox OFED releases include firmware updates for ConnectX-4/ConnectX-5
-  adapters.
+  Mellanox OFED releases include firmware updates for
+  ConnectX-4/ConnectX-5/Bluefield adapters.
 
   Because each release provides new features, these updates must be applied to
   match the kernel modules and libraries they come with.
@@ -482,6 +485,7 @@ Mellanox OFED
   - ConnectX-4 Lx: **14.21.1000** and above.
   - ConnectX-5: **16.21.1000** and above.
   - ConnectX-5 Ex: **16.21.1000** and above.
+  - Bluefield: **18.23.1000** and above.
 
 While these libraries and kernel modules are available on OpenFabrics
 Alliance's `website <https://www.openfabrics.org/>`__ and provided by package
@@ -699,7 +703,7 @@ Usage example
 -------------
 
 This section demonstrates how to launch **testpmd** with Mellanox
-ConnectX-4/ConnectX-5 devices managed by librte_pmd_mlx5.
+ConnectX-4/ConnectX-5/Bluefield devices managed by librte_pmd_mlx5.
 
 #. Load the kernel modules:
 
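As a usage note for the run-time parameters touched above: they are passed
as device arguments on testpmd's PCI whitelist option. A hedged sketch, in
which the PCI address and the parameter values are purely illustrative
rather than recommendations from this patch:

  testpmd -w 0000:05:00.0,txq_inline=200,txq_mpw_en=1 -- --rxq=2 --txq=2
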
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8aa91cc8ed..0ce45eb852 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1248,6 +1248,10 @@ static const struct rte_pci_id mlx5_pci_id_map[] = {
 			       PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF)
 	},
 	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
+			       PCI_DEVICE_ID_MELLANOX_CONNECTX5BF)
+	},
+	{
 		.vendor_id = 0
 	}
 };
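
For readers unfamiliar with the probe path: ``RTE_PCI_DEVICE`` is a
designated-initializer convenience macro, so the entry added above is
roughly equivalent to the sketch below (based on the macro's definition in
``rte_bus_pci.h``; the variable name is illustrative):

  /* Approximate expansion of the new table entry. The PCI bus driver
   * matches every scanned device against this table, and the zeroed
   * .vendor_id entry following it acts as the terminator. */
  struct rte_pci_id bluefield_id = {
          .class_id = RTE_CLASS_ANY_ID,
          .vendor_id = PCI_VENDOR_ID_MELLANOX,             /* 0x15b3 */
          .device_id = PCI_DEVICE_ID_MELLANOX_CONNECTX5BF, /* 0xa2d2 */
          .subsystem_vendor_id = PCI_ANY_ID,
          .subsystem_device_id = PCI_ANY_ID,
  };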
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c4c962b92d..a9c692555e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -50,6 +50,7 @@ enum {
 	PCI_DEVICE_ID_MELLANOX_CONNECTX5VF = 0x1018,
 	PCI_DEVICE_ID_MELLANOX_CONNECTX5EX = 0x1019,
 	PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF = 0x101a,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX5BF = 0xa2d2,
 };
 
 LIST_HEAD(mlx5_dev_list, priv);
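
To check whether a given host actually exposes the new ID before binding
it, the PCI bus can be queried directly; the vendor:device pair below
mirrors the constants added above (0x15b3 being the Mellanox vendor ID):

  lspci -nn -d 15b3:a2d2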
-- 
2.12.0

Thread overview: 7+ messages
2018-02-25  7:20 [dpdk-dev] [PATCH] " Shahaf Shuler
2018-02-26  8:11 ` Nélio Laranjeiro
2018-05-15  6:12 ` Shahaf Shuler [this message]
2018-05-15  9:28   ` [dpdk-dev] [PATCH v2] " Nélio Laranjeiro
2018-05-15 10:00     ` Shahaf Shuler
2018-05-17 10:48     ` Ferruh Yigit
2018-05-17 12:36       ` Shahaf Shuler
