From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: dev@dpdk.org
Date: Thu, 26 Nov 2015 09:49:28 +0000
Message-Id: <1448531369-8808-9-git-send-email-alejandro.lucero@netronome.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1448531369-8808-1-git-send-email-alejandro.lucero@netronome.com>
References: <1448531369-8808-1-git-send-email-alejandro.lucero@netronome.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v9 8/9] nfp: adding nic guide

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Signed-off-by: Rolf Neugebauer <rolf.neugebauer@netronome.com>
---
 doc/guides/nics/index.rst |    1 +
 doc/guides/nics/nfp.rst   |  265 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 266 insertions(+)
 create mode 100644 doc/guides/nics/nfp.rst

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 0a0b724..7bf2938 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -46,6 +46,7 @@ Network Interface Controller Drivers
     intel_vf
     mlx4
     mlx5
+    nfp
     szedata2
     virtio
     vmxnet3
diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
new file mode 100644
index 0000000..55ba64d
--- /dev/null
+++ b/doc/guides/nics/nfp.rst
@@ -0,0 +1,265 @@
+..  BSD LICENSE
+    Copyright(c) 2015 Netronome Systems, Inc. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Netronome Systems, Inc. nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+NFP poll mode driver library
+============================
+
+Netronome's sixth generation of flow processors pack 216 programmable
+cores and over 100 hardware accelerators that uniquely combine packet,
+flow, security and content processing in a single device that scales
+up to 400 Gbps.
+
+This document explains how to use DPDK with the Netronome Poll Mode
+Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
+(NFP-6xxx).
+
+Currently the driver supports virtual functions (VFs) only.
+
+Dependencies
+------------
+
+Before using Netronome's DPDK PMD some NFP-6xxx configuration, which
+is not related to DPDK, is required. The system requires installation
+of **Netronome's BSP (Board Support Package)**, which includes Linux
+drivers, programs and libraries.
+
+If you have an NFP-6xxx device you should already have the code and
+documentation for doing this configuration. Contact
+**support@netronome.com** to obtain the latest available firmware.
+
+The NFP Linux kernel drivers (including the required PF driver for the
+NFP) are available on Github at
+**https://github.com/Netronome/nfp-drv-kmods** along with build
+instructions.
+
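+As a rough sketch, fetching and building these out-of-tree modules
+typically looks like the following; the exact build and install steps
+may differ, so follow the instructions in the repository itself:
+
+.. code-block:: console
+
+   git clone https://github.com/Netronome/nfp-drv-kmods.git
+   cd nfp-drv-kmods
+   make
+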
+DPDK runs in userspace and PMDs use the Linux kernel UIO interface to
+allow access to physical devices from userspace. The NFP PMD requires
+a separate UIO driver, **nfp_uio**, to perform correct
+initialization. This driver is part of Netronome's BSP and it is
+equivalent to Intel's igb_uio driver.
+
+Building the software
+---------------------
+
+Netronome's PMD code is provided in the **drivers/net/nfp** directory.
+Because of the BSP dependencies, the driver is disabled by default in
+the DPDK build, as set in the **common_linuxapp** configuration file.
+To enable the driver, or if you use another configuration file and
+want NFP support, this variable is needed:
+
+- **CONFIG_RTE_LIBRTE_NFP_PMD=y**
+
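+For example, a typical way to enable the option and build DPDK is
+sketched below; the build target name is only a common example and may
+differ for your setup:
+
+.. code-block:: console
+
+   sed -i 's/CONFIG_RTE_LIBRTE_NFP_PMD=n/CONFIG_RTE_LIBRTE_NFP_PMD=y/' config/common_linuxapp
+   make install T=x86_64-native-linuxapp-gcc
+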
+Once DPDK is built, all the DPDK apps and examples include support for
+the NFP PMD.
+
+
+System configuration
+--------------------
+
+Using the NFP PMD is no different from using other PMDs. The usual
+steps are:
+
+#. **Configure hugepages:** All major Linux distributions have the
+   hugepages functionality enabled by default. By default this allows
+   the system to work with transparent hugepages, but in this case some
+   hugepages need to be created/reserved for use with DPDK through the
+   hugetlbfs file system. First the virtual file system needs to be
+   mounted:
+
+   .. code-block:: console
+
+      mount -t hugetlbfs none /mnt/hugetlbfs
+
+   The command uses the common mount point for this file system, and
+   it needs to be created if it does not already exist.
+
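+   The mount point may simply be created first if it does not already
+   exist, for example:
+
+   .. code-block:: console
+
+      mkdir -p /mnt/hugetlbfs
+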
+   Configuring hugepages is performed via sysfs:
+
+   .. code-block:: console
+
+      /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   This sysfs file is used to specify the number of hugepages to reserve.
+   For example:
+
+   .. code-block:: console
+
+      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   This will reserve 2GB of memory using 1024 2MB hugepages. The file
+   may be read to see if the operation was performed correctly:
+
+   .. code-block:: console
+
+      cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+   The number of unused hugepages may also be inspected. Before
+   executing the DPDK app it should match the value of nr_hugepages:
+
+   .. code-block:: console
+
+      cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
+
+   The hugepages reservation should be performed at system
+   initialisation and it is usual to use a kernel parameter for
+   configuration. If the reservation is attempted on a busy system it
+   will likely fail. Reserving memory for hugepages may be done by
+   adding the following to the grub kernel command line:
+
+   .. code-block:: console
+
+      default_hugepagesz=2M hugepagesz=2M hugepages=1024
+
+   This will reserve 2GB of memory using 2MB hugepages.
+
+   Finally, for a NUMA system the allocation needs to be made on the
+   correct NUMA node. In a DPDK app there is a master core which will
+   (usually) perform memory allocation. It is important that some of
+   the hugepages are reserved on the NUMA memory node where the network
+   device is attached. This is because of a restriction in DPDK by
+   which TX and RX descriptor rings must be created on the master core.
+
+   Per-node allocation of hugepages may be inspected and controlled
+   using sysfs. For example:
+
+   .. code-block:: console
+
+      cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+
+   For a NUMA system there will be a specific hugepage directory per
+   node, allowing control of hugepage reservation. A common problem may
+   occur when hugepage reservation is performed after the system has
+   been running for some time. Configuration using the global sysfs
+   hugepage interface will succeed but the per-node allocations may be
+   unsatisfactory.
+
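+   Hugepages may likewise be reserved on a specific node by writing to
+   the same per-node file (node0 here is just an illustration):
+
+   .. code-block:: console
+
+      echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+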
+   The number of hugepages that need to be reserved depends on how the
+   app uses TX and RX descriptors, and packet mbufs.
+
+#. **Enable SR-IOV on the NFP-6xxx device:** The current NFP PMD works
+   with Virtual Functions (VFs) on an NFP device. Make sure that one of
+   the Physical Function (PF) drivers from the above Github repository
+   is installed and loaded.
+
+   Virtual Functions need to be enabled before they can be used with
+   the PMD. Before enabling the VFs it is useful to obtain information
+   about the current NFP PCI device detected by the system:
+
+   .. code-block:: console
+
+      lspci -d19ee:
+
+   Now, for example, configure two virtual functions on an NFP-6xxx
+   device whose PCI system identity is "0000:03:00.0":
+
+   .. code-block:: console
+
+      echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
+
+   The result of this command may be shown using lspci again:
+
+   .. code-block:: console
+
+      lspci -d19ee: -k
+
+   Two new PCI devices should appear in the output of the above
+   command. The -k option shows the device driver, if any, that the
+   devices are bound to. Depending on the modules loaded at this point,
+   the new PCI devices may be bound to the nfp_netvf driver.
+
+#. **To install the uio kernel module (manually):** All major Linux
+   distributions have support for this kernel module so it is
+   straightforward to install it:
+
+   .. code-block:: console
+
+      modprobe uio
+
+   The module should now be listed by the lsmod command.
+
+#. **To install the nfp_uio kernel module (manually):** This module
+   supports NFP-6xxx devices through the UIO interface.
+
+   This module is part of Netronome's BSP and it should be available
+   when the BSP is installed.
+
+   .. code-block:: console
+
+      modprobe nfp_uio
+
+   The module should now be listed by the lsmod command.
+
+   Depending on which NFP modules are loaded, nfp_uio may be
+   automatically bound to the NFP PCI devices by the system. Otherwise
+   the binding needs to be done explicitly. This is the case when
+   nfp_netvf, the Linux kernel driver for NFP VFs, was loaded when VFs
+   were created. As described later in this document this configuration
+   may also be performed using scripts provided by Netronome's BSP.
+
+   First the device needs to be unbound, for example from the nfp_netvf
+   driver:
+
+   .. code-block:: console
+
+      echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind
+
+      lspci -d19ee: -k
+
+   The output of lspci should now show that 0000:03:08.0 is not bound to
+   any driver.
+
+   The next step is to add the NFP PCI ID to the NFP UIO driver:
+
+   .. code-block:: console
+
+      echo 19ee 6003 > /sys/bus/pci/drivers/nfp_uio/new_id
+
+   And then to bind the device to the nfp_uio driver:
+
+   .. code-block:: console
+
+      echo 0000:03:08.0 > /sys/bus/pci/drivers/nfp_uio/bind
+
+      lspci -d19ee: -k
+
+   lspci should now show the device bound to the nfp_uio driver.
+
+#. **Using tools from Netronome's BSP to install and bind modules:**
+   DPDK provides scripts which are useful for installing the UIO
+   modules and for binding the right device to those modules, avoiding
+   the need to do so manually. However, these scripts do not support
+   Netronome's UIO driver. Along with the drivers, the BSP installs
+   slightly modified versions of those DPDK scripts with support for
+   Netronome's UIO driver.
+
+   Those specific scripts can be found in Netronome's BSP installation
+   directory. Refer to the BSP documentation for more information.
+
+   * **setup.sh**
+   * **dpdk_nic_bind.py**
+
+   Configuration may be performed by running setup.sh, which invokes
+   dpdk_nic_bind.py as needed. Executing setup.sh will display a menu of
+   configuration options.
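+
+   For reference, one possible binding step with the BSP copy of
+   dpdk_nic_bind.py, assuming it keeps the standard DPDK script
+   options, could look like this (using the example VF address from
+   earlier in this document):
+
+   .. code-block:: console
+
+      ./dpdk_nic_bind.py --status
+      ./dpdk_nic_bind.py --bind=nfp_uio 0000:03:08.0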
-- 
1.7.9.5