From: Raslan Darawsheh <rasland@mellanox.com>
To: Slava Ovsiienko <viacheslavo@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Asaf Penso <asafp@mellanox.com>,
Wisam Monther <wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH v2] net/mlx5: add ConnectX6-DX device id
Date: Thu, 7 Nov 2019 09:25:06 +0000 [thread overview]
Message-ID: <1573118700-2542-1-git-send-email-rasland@mellanox.com> (raw)
In-Reply-To: <1573116011-1610-1-git-send-email-rasland@mellanox.com>
This adds new device IDs to the list of Mellanox devices
that run the mlx5 PMD:
- ConnectX-6 DX device ID
- ConnectX-6 DX SR-IOV (VF) device ID
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
---
v2: add missing documentation and update the PCI ID map
---
doc/guides/nics/mlx5.rst | 36 +++++++++++++++++++++-------------
doc/guides/rel_notes/release_19_11.rst | 1 +
drivers/net/mlx5/mlx5.c | 10 ++++++++++
drivers/net/mlx5/mlx5.h | 2 ++
4 files changed, 35 insertions(+), 14 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..0dec788 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -7,9 +7,9 @@ MLX5 poll mode driver
The MLX5 poll mode driver library (**librte_pmd_mlx5**) provides support
for **Mellanox ConnectX-4**, **Mellanox ConnectX-4 Lx** , **Mellanox
-ConnectX-5**, **Mellanox ConnectX-6** and **Mellanox BlueField** families
-of 10/25/40/50/100/200 Gb/s adapters as well as their virtual functions (VF)
-in SR-IOV context.
+ConnectX-5**, **Mellanox ConnectX-6**, **Mellanox ConnectX-6 DX** and
+**Mellanox BlueField** families of 10/25/40/50/100/200 Gb/s adapters
+as well as their virtual functions (VF) in SR-IOV context.
Information and documentation about these adapters can be found on the
`Mellanox website <http://www.mellanox.com>`__. Help is also provided by the
@@ -313,8 +313,10 @@ Run-time configuration
Supported on:
- - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
- - POWER9 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6, ConnectX-6 DX
+ and BlueField.
+ - POWER9 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6, ConnectX-6 DX
+ and BlueField.
- ``rxq_cqe_pad_en`` parameter [int]
@@ -344,8 +346,10 @@ Run-time configuration
Supported on:
- - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
- - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField.
+ - x86_64 with ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6, ConnectX-6 DX
+ and BlueField.
+ - POWER8 and ARMv8 with ConnectX-4 LX, ConnectX-5, ConnectX-6, ConnectX-6 DX
+ and BlueField.
- ``mprq_en`` parameter [int]
@@ -537,11 +541,11 @@ Run-time configuration
- ``txq_mpw_en`` parameter [int]
A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
- ConnectX-6 and BlueField. eMPW allows the TX burst function to pack up multiple
- packets in a single descriptor session in order to save PCI bandwidth and improve
- performance at the cost of a slightly higher CPU usage. When ``txq_inline_mpw``
- is set along with ``txq_mpw_en``, TX burst function copies entire packet
- data on to TX descriptor instead of including pointer of packet.
+ ConnectX-6, ConnectX-6 DX and BlueField. eMPW allows the TX burst function to pack
+ up multiple packets in a single descriptor session in order to save PCI bandwidth
+ and improve performance at the cost of a slightly higher CPU usage. When
+ ``txq_inline_mpw`` is set along with ``txq_mpw_en``, TX burst function copies
+ entire packet data on to TX descriptor instead of including pointer of packet.
The Enhanced Multi-Packet Write feature is enabled by default if NIC supports
it, can be disabled by explicit specifying 0 value for ``txq_mpw_en`` option.
@@ -550,8 +554,8 @@ Run-time configuration
- ``tx_vec_en`` parameter [int]
- A nonzero value enables Tx vector on ConnectX-5, ConnectX-6 and BlueField
- NICs if the number of global Tx queues on the port is less than
+ A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 DX
+ and BlueField NICs if the number of global Tx queues on the port is less than
``txqs_max_vec``. The parameter is deprecated and ignored.
- ``rx_vec_en`` parameter [int]
@@ -794,6 +798,7 @@ Mellanox OFED/EN
- ConnectX-5: **16.21.1000** and above.
- ConnectX-5 Ex: **16.21.1000** and above.
- ConnectX-6: **20.99.5374** and above.
+ - ConnectX-6 DX: **22.27.0090** and above.
- BlueField: **18.25.1010** and above.
While these libraries and kernel modules are available on OpenFabrics
@@ -837,6 +842,9 @@ Supported NICs
* Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
* Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
* Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)
+* Mellanox(R) ConnectX(R)-6 200G MCX654106A-HCAT (4x200G)
+* Mellanox(R) ConnectX(R)-6 DX EN 100G MCX623106AN-CDAT (2x100G)
+* Mellanox(R) ConnectX(R)-6 DX EN 200G MCX623105AN-VDAT (1x200G)
Quick Start Guide on OFED/EN
----------------------------
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 23182d1..2920fa2 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -169,6 +169,7 @@ New Features
* Added support for VLAN set VID offload command.
* Added support for matching on packets withe Geneve tunnel header.
* Added hairpin support.
+ * Added ConnectX-6 DX support.
* **Updated the AF_XDP PMD.**
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 72c30bf..594fac1 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2891,6 +2891,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
case PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF:
case PCI_DEVICE_ID_MELLANOX_CONNECTX5BFVF:
case PCI_DEVICE_ID_MELLANOX_CONNECTX6VF:
+ case PCI_DEVICE_ID_MELLANOX_CONNECTX6DXVF:
dev_config.vf = 1;
break;
default:
@@ -3061,6 +3063,14 @@ static const struct rte_pci_id mlx5_pci_id_map[] = {
PCI_DEVICE_ID_MELLANOX_CONNECTX6VF)
},
{
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
+ PCI_DEVICE_ID_MELLANOX_CONNECTX6DX)
+ },
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX,
+ PCI_DEVICE_ID_MELLANOX_CONNECTX6DXVF)
+ },
+ {
.vendor_id = 0
}
};
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..b56dae1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -55,6 +55,8 @@ enum {
PCI_DEVICE_ID_MELLANOX_CONNECTX5BFVF = 0xa2d3,
PCI_DEVICE_ID_MELLANOX_CONNECTX6 = 0x101b,
PCI_DEVICE_ID_MELLANOX_CONNECTX6VF = 0x101c,
+ PCI_DEVICE_ID_MELLANOX_CONNECTX6DX = 0x101d,
+ PCI_DEVICE_ID_MELLANOX_CONNECTX6DXVF = 0x101e,
};
/* Request types for IPC. */
--
2.7.4
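For context on why the patch inserts the two new `RTE_PCI_DEVICE` entries before the `.vendor_id = 0` entry: DPDK PCI ID tables are sentinel-terminated arrays scanned linearly at probe time. The sketch below illustrates that pattern with hypothetical, simplified types (`struct pci_id`, `pci_id_match`) rather than the real `struct rte_pci_id`; only the vendor/device ID values are taken from the patch.

```c
#include <stdint.h>

/* Hypothetical stand-in for DPDK's struct rte_pci_id, reduced to the
 * two fields relevant to this patch. */
struct pci_id {
	uint16_t vendor_id;
	uint16_t device_id;
};

#define PCI_VENDOR_ID_MELLANOX               0x15b3
#define PCI_DEVICE_ID_MELLANOX_CONNECTX6DX   0x101d
#define PCI_DEVICE_ID_MELLANOX_CONNECTX6DXVF 0x101e

/* The table is scanned linearly and a vendor_id of 0 terminates it,
 * which is why new entries must go before the sentinel. */
static const struct pci_id id_map[] = {
	{ PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTX6DX },
	{ PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTX6DXVF },
	{ .vendor_id = 0 }	/* sentinel */
};

/* Return 1 if (vendor, device) appears in the table, 0 otherwise;
 * this mirrors (in simplified form) how the PCI bus decides whether
 * a driver's probe callback is invoked for a device. */
static int
pci_id_match(uint16_t vendor, uint16_t device)
{
	const struct pci_id *id;

	for (id = id_map; id->vendor_id != 0; id++)
		if (id->vendor_id == vendor && id->device_id == device)
			return 1;
	return 0;
}
```

With this table, a ConnectX-6 DX PF (15b3:101d) or VF (15b3:101e) matches, while older device IDs fall through to the sentinel and are rejected.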
Thread overview: 9+ messages
2019-11-07 8:40 [dpdk-dev] [PATCH] " Raslan Darawsheh
2019-11-07 9:15 ` Wisam Monther
2019-11-07 9:25 ` Raslan Darawsheh
2019-11-07 9:25 ` Raslan Darawsheh [this message]
2019-11-07 9:36 ` [dpdk-dev] [PATCH v3] " Raslan Darawsheh
2019-11-07 11:27 ` Slava Ovsiienko
2019-11-07 12:39 ` Raslan Darawsheh
2019-11-07 13:40 ` Ferruh Yigit
2019-11-07 13:58 ` Raslan Darawsheh