DPDK patches and discussions
* [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter
@ 2017-07-26 17:17 Shahaf Shuler
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-26 17:17 UTC (permalink / raw)
  To: thomas; +Cc: dev


The UIO and VFIO sections should not be part of
the "Compiling the DPDK Target from Source" chapter,
as they are PMD specific and do not apply to all PMDs.

Instead, move those sections to a new chapter
which includes all the kernel drivers used along with
the different PMDs.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/linux_gsg/build_dpdk.rst              | 135 ----------------
 doc/guides/linux_gsg/index.rst                   |   1 +
 doc/guides/linux_gsg/linux_drivers.rst           | 197 +++++++++++++++++++++++
 doc/guides/linux_gsg/nic_perf_intel_platform.rst |  18 +--
 4 files changed, 200 insertions(+), 151 deletions(-)
 create mode 100644 doc/guides/linux_gsg/linux_drivers.rst

diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index cf6c06d64..e32afd5f8 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -143,138 +143,3 @@ Browsing the Installed DPDK Environment Target
 Once a target is created it contains all libraries, including poll-mode drivers, and header files for the DPDK environment that are required to build customer applications.
 In addition, the test and testpmd applications are built under the build/app directory, which may be used for testing.
 A kmod  directory is also present that contains kernel modules which may be loaded if needed.
-
-Loading Modules to Enable Userspace IO for DPDK
------------------------------------------------
-
-To run any DPDK application, a suitable uio module can be loaded into the running kernel.
-In many cases, the standard ``uio_pci_generic`` module included in the Linux kernel
-can provide the uio capability. This module can be loaded using the command
-
-.. code-block:: console
-
-    sudo modprobe uio_pci_generic
-
-.. note::
-
-    ``uio_pci_generic`` module doesn't support the creation of virtual functions.
-
-As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
-module which can be found in the kmod subdirectory referred to above. It can
-be loaded as shown below:
-
-.. code-block:: console
-
-    sudo modprobe uio
-    sudo insmod kmod/igb_uio.ko
-
-.. note::
-
-    For some devices which lack support for legacy interrupts, e.g. virtual function
-    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
-
-Since DPDK release 1.7 onward provides VFIO support, use of UIO is optional
-for platforms that support using VFIO.
-
-Loading VFIO Module
--------------------
-
-To run an DPDK application and make use of VFIO, the ``vfio-pci`` module must be loaded:
-
-.. code-block:: console
-
-    sudo modprobe vfio-pci
-
-Note that in order to use VFIO, your kernel must support it.
-VFIO kernel modules have been included in the Linux kernel since version 3.6.0 and are usually present by default,
-however please consult your distributions documentation to make sure that is the case.
-
-Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
-
-.. note::
-
-    ``vfio-pci`` module doesn't support the creation of virtual functions.
-
-For proper operation of VFIO when running DPDK applications as a non-privileged user, correct permissions should also be set up.
-This can be done by using the DPDK setup script (called dpdk-setup.sh and located in the usertools directory).
-
-.. _linux_gsg_binding_kernel:
-
-Binding and Unbinding Network Ports to/from the Kernel Modules
---------------------------------------------------------------
-
-As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use.
-Instead, all ports that are to be used by an DPDK application must be bound to the
-``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
-Any network ports under Linux* control will be ignored by the DPDK poll-mode drivers and cannot be used by the application.
-
-.. warning::
-
-    The DPDK will, by default, no longer automatically unbind network ports from the kernel driver at startup.
-    Any ports to be used by an DPDK application must be unbound from Linux* control and
-    bound to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
-
-To bind ports to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module for DPDK use,
-and then subsequently return ports to Linux* control,
-a utility script called dpdk-devbind.py is provided in the usertools subdirectory.
-This utility can be used to provide a view of the current state of the network ports on the system,
-and to bind and unbind those ports from the different kernel modules, including the uio and vfio modules.
-The following are some examples of how the script can be used.
-A full description of the script and its parameters can be obtained by calling the script with the ``--help`` or ``--usage`` options.
-Note that the uio or vfio kernel modules to be used, should be loaded into the kernel before
-running the ``dpdk-devbind.py`` script.
-
-.. warning::
-
-    Due to the way VFIO works, there are certain limitations to which devices can be used with VFIO.
-    Mainly it comes down to how IOMMU groups work.
-    Any Virtual Function device can be used with VFIO on its own, but physical devices will require either all ports bound to VFIO,
-    or some of them bound to VFIO while others not being bound to anything at all.
-
-    If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU group in which your device is in.
-    Therefore, the bridge driver should also be unbound from the bridge PCI device for VFIO to work with devices behind the bridge.
-
-.. warning::
-
-    While any user can run the dpdk-devbind.py script to view the status of the network ports,
-    binding or unbinding network ports requires root privileges.
-
-To see the status of all network ports on the system:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --status
-
-    Network devices using DPDK-compatible driver
-    ============================================
-    0000:82:00.0 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
-    0000:82:00.1 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
-
-    Network devices using kernel driver
-    ===================================
-    0000:04:00.0 'I350 1-GbE NIC' if=em0  drv=igb unused=uio_pci_generic *Active*
-    0000:04:00.1 'I350 1-GbE NIC' if=eth1 drv=igb unused=uio_pci_generic
-    0000:04:00.2 'I350 1-GbE NIC' if=eth2 drv=igb unused=uio_pci_generic
-    0000:04:00.3 'I350 1-GbE NIC' if=eth3 drv=igb unused=uio_pci_generic
-
-    Other network devices
-    =====================
-    <none>
-
-To bind device ``eth1``,``04:00.1``, to the ``uio_pci_generic`` driver:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1
-
-or, alternatively,
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1
-
-To restore device ``82:00.0`` to its original kernel binding:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=ixgbe 82:00.0
diff --git a/doc/guides/linux_gsg/index.rst b/doc/guides/linux_gsg/index.rst
index 3d3ada15e..799559c22 100644
--- a/doc/guides/linux_gsg/index.rst
+++ b/doc/guides/linux_gsg/index.rst
@@ -40,6 +40,7 @@ Getting Started Guide for Linux
     intro
     sys_reqs
     build_dpdk
+    linux_drivers
     build_sample_apps
     enable_func
     quick_start
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
new file mode 100644
index 000000000..b3525c0b2
--- /dev/null
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -0,0 +1,197 @@
+..  BSD LICENSE
+    Copyright(c) 2010-2015 Intel Corporation.
+    Copyright(c) 2017 Mellanox Corporation.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+.. _linux_gsg_linux_drivers:
+
+Linux Drivers
+=============
+
+Different PMDs may require different kernel drivers in order to work properly.
+Depending on the PMD being used, a corresponding kernel driver should be loaded
+and bound to the network ports.
+
+UIO
+---
+
+A small kernel module to set up the device, map device memory to user-space and register interrupts.
+In many cases, the standard ``uio_pci_generic`` module included in the Linux kernel
+can provide the uio capability. This module can be loaded using the command:
+
+.. code-block:: console
+
+    sudo modprobe uio_pci_generic
+
+.. note::
+
+    ``uio_pci_generic`` module doesn't support the creation of virtual functions.
+
+As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
+module which can be found in the kmod subdirectory referred to above. It can
+be loaded as shown below:
+
+.. code-block:: console
+
+    sudo modprobe uio
+    sudo insmod kmod/igb_uio.ko
+
+.. note::
+
+    For some devices which lack support for legacy interrupts, e.g. virtual function
+    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
+
+Since DPDK release 1.7 onward provides VFIO support, use of UIO is optional
+for platforms that support using VFIO.
+
+VFIO
+----
+
+A more robust and secure driver than UIO, relying on IOMMU protection.
+To make use of VFIO, the ``vfio-pci`` module must be loaded:
+
+.. code-block:: console
+
+    sudo modprobe vfio-pci
+
+Note that in order to use VFIO, your kernel must support it.
+VFIO kernel modules have been included in the Linux kernel since version 3.6.0 and are usually present by default,
+however, please consult your distribution's documentation to make sure that is the case.
+
+Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
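+
+For example, on an Intel platform this typically means booting the kernel with the
+``intel_iommu=on`` parameter and checking that the IOMMU was detected
+(the exact messages vary by platform and kernel version):
+
+.. code-block:: console
+
+    dmesg | grep -e DMAR -e IOMMU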
+
+.. note::
+
+    ``vfio-pci`` module doesn't support the creation of virtual functions.
+
+For proper operation of VFIO when running DPDK applications as a non-privileged user, correct permissions should also be set up.
+This can be done by using the DPDK setup script (called dpdk-setup.sh and located in the usertools directory).
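+
+As a minimal manual alternative, assuming the port is already bound to ``vfio-pci`` and
+belongs to IOMMU group ``42`` (the group number here is only an illustration), access
+can be granted with:
+
+.. code-block:: console
+
+    sudo chown $USER /dev/vfio/42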
+
+.. note::
+
+    VFIO can be used without IOMMU. While this is just as unsafe as using UIO, it does make it possible for the user to keep the degree of device access and programming that VFIO has, in situations where IOMMU is not available.
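+
+    On kernels that provide it (4.5 and newer), this mode can typically be enabled with:
+
+    .. code-block:: console
+
+        echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode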
+
+Bifurcated Driver
+-----------------
+
+PMDs which use the bifurcated driver co-exist with the device kernel driver.
+In this model the NIC is controlled by the kernel, while the data
+path is performed by the PMD directly on top of the device.
+
+This model has the following benefits:
+
+ - It is secure and robust, as the memory management and isolation
+   are done by the kernel.
+ - It enables the user to use legacy Linux tools such as ``ethtool`` or
+   ``ifconfig`` while running a DPDK application on the same network ports
+   (see the example below).
+ - It enables the DPDK application to filter only part of the traffic,
+   while the rest is directed to and handled by the kernel driver.
+
+More about the bifurcated driver can be found in
+`Mellanox Bifurcated DPDK PMD
+<https://dpdksummit.com/Archive/pdf/2016Userspace/Day02-Session04-RonyEfraim-Userspace2016.pdf>`__.
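+
+For instance, while a DPDK application is running on such a port, kernel-side tools
+keep working on the same netdev (``enp130s0f0`` below is only a placeholder interface name):
+
+.. code-block:: console
+
+    ethtool -i enp130s0f0
+    ip link show enp130s0f0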
+
+.. _linux_gsg_binding_kernel:
+
+Binding and Unbinding Network Ports to/from the Kernel Modules
+--------------------------------------------------------------
+
+.. note::
+
+    PMDs which use the bifurcated driver should not be unbound from their kernel drivers. This section is for PMDs which use the UIO or VFIO drivers.
+
+As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use.
+Instead, if the PMD being used relies on the UIO or VFIO drivers, all ports that are to be used by a DPDK application must be bound to the
+``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
+For such PMDs, any network ports under Linux* control will be ignored and cannot be used by the application.
+
+To bind ports to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module for DPDK use,
+and then subsequently return ports to Linux* control,
+a utility script called dpdk-devbind.py is provided in the usertools subdirectory.
+This utility can be used to provide a view of the current state of the network ports on the system,
+and to bind and unbind those ports from the different kernel modules, including the uio and vfio modules.
+The following are some examples of how the script can be used.
+A full description of the script and its parameters can be obtained by calling the script with the ``--help`` or ``--usage`` options.
+Note that the uio or vfio kernel modules to be used should be loaded into the kernel before
+running the ``dpdk-devbind.py`` script.
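+
+For example, to check that the required module is already loaded:
+
+.. code-block:: console
+
+    lsmod | grep -e uio -e vfio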
+
+.. warning::
+
+    Due to the way VFIO works, there are certain limitations to which devices can be used with VFIO.
+    Mainly it comes down to how IOMMU groups work.
+    Any Virtual Function device can be used with VFIO on its own, but physical devices will require either all ports bound to VFIO,
+    or some of them bound to VFIO while others not being bound to anything at all.
+
+    If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU group in which your device is in.
+    Therefore, the bridge driver should also be unbound from the bridge PCI device for VFIO to work with devices behind the bridge.
+
+.. warning::
+
+    While any user can run the dpdk-devbind.py script to view the status of the network ports,
+    binding or unbinding network ports requires root privileges.
+
+To see the status of all network ports on the system:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --status
+
+    Network devices using DPDK-compatible driver
+    ============================================
+    0000:82:00.0 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
+    0000:82:00.1 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
+
+    Network devices using kernel driver
+    ===================================
+    0000:04:00.0 'I350 1-GbE NIC' if=em0  drv=igb unused=uio_pci_generic *Active*
+    0000:04:00.1 'I350 1-GbE NIC' if=eth1 drv=igb unused=uio_pci_generic
+    0000:04:00.2 'I350 1-GbE NIC' if=eth2 drv=igb unused=uio_pci_generic
+    0000:04:00.3 'I350 1-GbE NIC' if=eth3 drv=igb unused=uio_pci_generic
+
+    Other network devices
+    =====================
+    <none>
+
+To bind device ``eth1``, ``04:00.1``, to the ``uio_pci_generic`` driver:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1
+
+or, alternatively,
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1
+
+To restore device ``82:00.0`` to its original kernel binding:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=ixgbe 82:00.0
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index 709113dcb..2653c5de6 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -186,22 +186,8 @@ Configurations before running DPDK
    **Note**: To get the best performance, ensure that the core and NICs are in the same socket.
    In the example above ``85:00.0`` is on socket 1 and should be used by cores on socket 1 for the best performance.
 
-4. Bind the test ports to DPDK compatible drivers, such as igb_uio. For example bind two ports to a DPDK compatible driver and check the status:
-
-   .. code-block:: console
-
-
-      # Bind ports 82:00.0 and 85:00.0 to dpdk driver
-      ./dpdk_folder/usertools/dpdk-devbind.py -b igb_uio 82:00.0 85:00.0
-
-      # Check the port driver status
-      ./dpdk_folder/usertools/dpdk-devbind.py --status
-
-   See ``dpdk-devbind.py --help`` for more details.
-
-
-More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk`.
-
+4. Check which kernel drivers need to be loaded and whether the network ports need to be unbound from their kernel drivers.
+For more details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
 
 Example of getting best performance for an Intel NIC
 ----------------------------------------------------
-- 
2.12.0


* [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
@ 2017-07-26 17:17 ` Shahaf Shuler
  2017-07-28 10:14   ` Mcnamara, John
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-26 17:17 UTC (permalink / raw)
  To: thomas; +Cc: dev

UIO is not a must for all PMDs.

Remove this hard requirement from the Linux Getting Started Guide.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/linux_gsg/build_sample_apps.rst | 10 ++++++----
 doc/guides/linux_gsg/sys_reqs.rst          |  2 --
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 12fefffdd..0cc5fd173 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -98,12 +98,14 @@ Running a Sample Application
 
 .. warning::
 
-    The UIO drivers and hugepages must be setup prior to running an application.
+    Before running the application make sure:
 
-.. warning::
+    - Hugepages setup is done.
+    - Any kernel driver being used is loaded.
+    - If needed, the ports to be used by the application are
+      bound to the corresponding kernel driver.
 
-    Any ports to be used by the application must be already bound to an appropriate kernel
-    module, as described in :ref:`linux_gsg_binding_kernel`, prior to running the application.
+    Refer to :ref:`linux_gsg_linux_drivers` for more details.
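+
+    As an illustration only, a minimal hugepage setup (assuming 2 MB pages and a
+    ``/mnt/huge`` mount point) could look like:
+
+    .. code-block:: console
+
+        echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+        sudo mkdir -p /mnt/huge
+        sudo mount -t hugetlbfs nodev /mnt/huge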
 
 The application is linked with the DPDK target environment's Environmental Abstraction Layer (EAL) library,
 which provides some options that are generic to every DPDK application.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index af7a93121..375bec8c9 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -131,8 +131,6 @@ System Software
 
     For other kernel builds, options which should be enabled for DPDK include:
 
-    *   UIO support
-
     *   HUGETLBFS
 
     *   PROC_PAGE_MONITOR  support
-- 
2.12.0


* [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
@ 2017-07-26 17:17 ` Shahaf Shuler
  2017-07-28 10:16   ` Mcnamara, John
  2017-07-28 10:13 ` [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Mcnamara, John
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-26 17:17 UTC (permalink / raw)
  To: thomas; +Cc: dev

The Linux Getting Started Guide contains
parts which are specific to the i40e PMD. This results
in confusion for users who read the guide on their
first try with DPDK.

Move those parts to the i40e NIC manual.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/linux_gsg/enable_func.rst             | 25 --------
 doc/guides/linux_gsg/nic_perf_intel_platform.rst | 56 -----------------
 doc/guides/linux_gsg/sys_reqs.rst                |  2 +-
 doc/guides/nics/i40e.rst                         | 79 ++++++++++++++++++++++++
 4 files changed, 80 insertions(+), 82 deletions(-)

diff --git a/doc/guides/linux_gsg/enable_func.rst b/doc/guides/linux_gsg/enable_func.rst
index 15f53b166..e6c806b0d 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -176,28 +176,3 @@ Also, if ``INTEL_IOMMU_DEFAULT_ON`` is not set in the kernel, the ``intel_iommu=
 This ensures that the Intel IOMMU is being initialized as expected.
 
 Please note that while using ``iommu=pt`` is compulsory for ``igb_uio driver``, the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
-
-High Performance of Small Packets on 40G NIC
---------------------------------------------
-
-As there might be firmware fixes for performance enhancement in latest version
-of firmware image, the firmware update might be needed for getting high performance.
-Check with the local Intel's Network Division application engineers for firmware updates.
-Users should consult the release notes specific to a DPDK release to identify
-the validated firmware version for a NIC using the i40e driver.
-
-Use 16 Bytes RX Descriptor Size
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As i40e PMD supports both 16 and 32 bytes RX descriptor sizes, and 16 bytes size can provide helps to high performance of small packets.
-Configuration of ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` in config files can be changed to use 16 bytes size RX descriptors.
-
-High Performance and per Packet Latency Tradeoff
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Due to the hardware design, the interrupt signal inside NIC is needed for per
-packet descriptor write-back. The minimum interval of interrupts could be set
-at compile time by ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in configuration files.
-Though there is a default configuration, the interval could be tuned by the
-users with that configuration item depends on what the user cares about more,
-performance or per packet latency.
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index 2653c5de6..febd73378 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -188,59 +188,3 @@ Configurations before running DPDK
 
 4. Check which kernel drivers need to be loaded and whether the network ports need to be unbound from their kernel drivers.
 For more details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
-
-Example of getting best performance for an Intel NIC
-----------------------------------------------------
-
-The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
-Intel server platform and Intel XL710 NICs.
-For specific 40G NIC configuration please refer to the i40e NIC guide.
-
-The example scenario is to get best performance with two Intel XL710 40GbE ports.
-See :numref:`figure_intel_perf_test_setup` for the performance test setup.
-
-.. _figure_intel_perf_test_setup:
-
-.. figure:: img/intel_perf_test_setup.*
-
-   Performance Test Setup
-
-
-1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
-   The reason for using two NICs is to overcome a PCIe Gen3's limitation since it cannot provide 80G bandwidth
-   for two 40G ports, but two different PCIe Gen3 x8 slot can.
-   Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
-
-      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
-      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
-
-2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
-
-3. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
-   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
-   are 18-35 and 54-71.
-   Note: Don't use 2 logical cores on the same core (e.g core18 has 2 logical cores, core18 and core54), instead, use 2 logical
-   cores from different cores (e.g core18 and core19).
-
-4. Bind these two ports to igb_uio.
-
-5. As to XL710 40G port, we need at least two queue pairs to achieve best performance, then two queues per port
-   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
-
-6. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
-   Compile the ``l3fwd sample`` with the default lpm mode.
-
-7. The command line of running l3fwd would be something like the followings::
-
-      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
-              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
-
-   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
-   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
-
-
-8. Configure the traffic at a traffic generator.
-
-   * Start creating a stream on packet generator.
-
-   * Set the Ethernet II type to 0x0800.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 375bec8c9..0f98876d3 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -43,7 +43,7 @@ BIOS Setting Prerequisite on x86
 
 For the majority of platforms, no special BIOS settings are needed to use basic DPDK functionality.
 However, for additional HPET timer and power management functionality,
-and high performance of small packets on 40G NIC, BIOS setting changes may be needed.
+and high performance of small packets, BIOS setting changes may be needed.
 Consult the section on :ref:`Enabling Additional Functionality <Enabling_Additional_Functionality>`
 for more information on the required changes.
 
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index a0262a9f7..bc200d39d 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -464,3 +464,82 @@ enabled using the steps below.
 #. Set the PCI configure register with new value::
 
       setpci -s <XX:XX.X> a8.w=<value>
+
+High Performance of Small Packets on 40G NIC
+--------------------------------------------
+
+As there might be firmware fixes for performance enhancement in the latest firmware
+image, a firmware update might be needed to achieve high performance.
+Check with your local Intel Network Division application engineers for firmware updates.
+Users should consult the release notes specific to a DPDK release to identify
+the validated firmware version for a NIC using the i40e driver.
+
+Use 16 Bytes RX Descriptor Size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte size can help improve the performance of small packets.
+The ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` option in the config files can be changed to use 16 byte RX descriptors.
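+
+As an illustration only (assuming the option defaults to ``n`` in ``config/common_base``,
+which may differ between DPDK versions), the change could be made with:
+
+.. code-block:: console
+
+    sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_base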
+
+High Performance and per Packet Latency Tradeoff
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Due to the hardware design, an interrupt signal inside the NIC is needed for per
+packet descriptor write-back. The minimum interval between interrupts can be set
+at compile time by ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in the configuration files.
+Although there is a default value, the interval can be tuned with that
+configuration item depending on whether the user cares more about
+throughput or per packet latency.
+
+Example of getting best performance with l3fwd example
+------------------------------------------------------
+
+The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
+Intel server platform and Intel XL710 NICs.
+
+The example scenario is to get best performance with two Intel XL710 40GbE ports.
+See :numref:`figure_intel_perf_test_setup` for the performance test setup.
+
+.. _figure_intel_perf_test_setup:
+
+.. figure:: img/intel_perf_test_setup.*
+
+   Performance Test Setup
+
+
+1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
+   The reason for using two NICs is to overcome a PCIe Gen3 limitation: a single slot cannot provide 80G bandwidth
+   for two 40G ports, but two different PCIe Gen3 x8 slots can.
+   Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
+
+      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
+      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
+
+2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
+
+3. Check the NUMA node (socket id) of the PCI devices and get the core numbers on that socket (see the example after this list).
+   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
+   are 18-35 and 54-71.
+   Note: Don't use 2 logical cores on the same physical core (e.g. core 18 has 2 logical cores, core 18 and core 54); instead, use 2 logical
+   cores from different physical cores (e.g. core 18 and core 19).
+
+4. Bind these two ports to igb_uio.
+
+5. For an XL710 40G port, at least two queue pairs are needed to achieve the best performance, so two queues per port
+   are required, and each queue pair needs a dedicated CPU core for receiving/transmitting packets.
+
+6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
+   Compile the ``l3fwd`` sample with the default LPM mode.
+
+7. The command line of running l3fwd would be something like the following::
+
+      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
+
+   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
+   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
+
+8. Configure the traffic at a traffic generator.
+
+   * Start creating a stream on packet generator.
+
+   * Set the Ethernet II type to 0x0800.
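+
+As an illustration of step 3 above, one possible way to query the NUMA placement
+(the exact output format may differ between systems):
+
+.. code-block:: console
+
+    # NUMA node of the PCI device
+    cat /sys/bus/pci/devices/0000:82:00.0/numa_node
+
+    # CPU cores on that NUMA node
+    lscpu | grep "NUMA node1 CPU(s)"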
-- 
2.12.0


* Re: [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
@ 2017-07-28 10:13 ` Mcnamara, John
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 " Shahaf Shuler
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Mcnamara, John @ 2017-07-28 10:13 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Wednesday, July 26, 2017 6:18 PM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter
> 
> The UIO and VFIO sections should not be part of the "Compiling the DPDK
> Target from Source" chapter, as it is PMD specific and not true for all
> PMDs.
> 
> Instead, moving those sections to a new chapter which include all kernel
> drivers being used along with the different PMDs.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>

Acked-by: John McNamara <john.mcnamara@intel.com>


* Re: [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
@ 2017-07-28 10:14   ` Mcnamara, John
  0 siblings, 0 replies; 10+ messages in thread
From: Mcnamara, John @ 2017-07-28 10:14 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Wednesday, July 26, 2017 6:18 PM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement
> 
> UIO is not a must for all PMDs.
> 
> Cleaning up the Linux Getting Started Guide from this hard requirement.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>


Acked-by: John McNamara <john.mcnamara@intel.com>


* Re: [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide
  2017-07-26 17:17 ` [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
@ 2017-07-28 10:16   ` Mcnamara, John
  0 siblings, 0 replies; 10+ messages in thread
From: Mcnamara, John @ 2017-07-28 10:16 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Wednesday, July 26, 2017 6:18 PM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide
> 
> The Linux Getting Started Guide contains parts which are specific for i40e
> PMD. This results in confusion for users which read the guide at their
> first try with DPDK.
> 
> Moving those parts to the i40e NIC manual.

This patch also needs to move the image references in the test:

    mv doc/guides/linux_gsg/img/intel_perf_test_setup.svg doc/guides/nics/img/

When you make that change you can apply my ack to patches 1-3.

P.S. Thanks for the cleanup.

John


* [dpdk-dev] [PATCH v2 1/3] doc: move kernel drivers to a new chapter
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
                   ` (2 preceding siblings ...)
  2017-07-28 10:13 ` [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Mcnamara, John
@ 2017-07-29 16:17 ` Shahaf Shuler
  2017-07-31 22:10   ` Thomas Monjalon
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
  5 siblings, 1 reply; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-29 16:17 UTC (permalink / raw)
  To: thomas, john.mcnamara; +Cc: dev


The UIO and VFIO sections should not be part of
the "Compiling the DPDK Target from Source" chapter,
as they are PMD specific and do not apply to all PMDs.

Instead, move those sections to a new chapter
which includes all the kernel drivers used along with
the different PMDs.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/linux_gsg/build_dpdk.rst              | 135 ----------------
 doc/guides/linux_gsg/index.rst                   |   1 +
 doc/guides/linux_gsg/linux_drivers.rst           | 197 +++++++++++++++++++++++
 doc/guides/linux_gsg/nic_perf_intel_platform.rst |  18 +--
 4 files changed, 200 insertions(+), 151 deletions(-)
 create mode 100644 doc/guides/linux_gsg/linux_drivers.rst

diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index cf6c06d64..e32afd5f8 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -143,138 +143,3 @@ Browsing the Installed DPDK Environment Target
 Once a target is created it contains all libraries, including poll-mode drivers, and header files for the DPDK environment that are required to build customer applications.
 In addition, the test and testpmd applications are built under the build/app directory, which may be used for testing.
 A kmod  directory is also present that contains kernel modules which may be loaded if needed.
-
-Loading Modules to Enable Userspace IO for DPDK
------------------------------------------------
-
-To run any DPDK application, a suitable uio module can be loaded into the running kernel.
-In many cases, the standard ``uio_pci_generic`` module included in the Linux kernel
-can provide the uio capability. This module can be loaded using the command
-
-.. code-block:: console
-
-    sudo modprobe uio_pci_generic
-
-.. note::
-
-    ``uio_pci_generic`` module doesn't support the creation of virtual functions.
-
-As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
-module which can be found in the kmod subdirectory referred to above. It can
-be loaded as shown below:
-
-.. code-block:: console
-
-    sudo modprobe uio
-    sudo insmod kmod/igb_uio.ko
-
-.. note::
-
-    For some devices which lack support for legacy interrupts, e.g. virtual function
-    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
-
-Since DPDK release 1.7 onward provides VFIO support, use of UIO is optional
-for platforms that support using VFIO.
-
-Loading VFIO Module
--------------------
-
-To run an DPDK application and make use of VFIO, the ``vfio-pci`` module must be loaded:
-
-.. code-block:: console
-
-    sudo modprobe vfio-pci
-
-Note that in order to use VFIO, your kernel must support it.
-VFIO kernel modules have been included in the Linux kernel since version 3.6.0 and are usually present by default,
-however please consult your distributions documentation to make sure that is the case.
-
-Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
-
-.. note::
-
-    ``vfio-pci`` module doesn't support the creation of virtual functions.
-
-For proper operation of VFIO when running DPDK applications as a non-privileged user, correct permissions should also be set up.
-This can be done by using the DPDK setup script (called dpdk-setup.sh and located in the usertools directory).
-
-.. _linux_gsg_binding_kernel:
-
-Binding and Unbinding Network Ports to/from the Kernel Modules
---------------------------------------------------------------
-
-As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use.
-Instead, all ports that are to be used by an DPDK application must be bound to the
-``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
-Any network ports under Linux* control will be ignored by the DPDK poll-mode drivers and cannot be used by the application.
-
-.. warning::
-
-    The DPDK will, by default, no longer automatically unbind network ports from the kernel driver at startup.
-    Any ports to be used by an DPDK application must be unbound from Linux* control and
-    bound to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
-
-To bind ports to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module for DPDK use,
-and then subsequently return ports to Linux* control,
-a utility script called dpdk-devbind.py is provided in the usertools subdirectory.
-This utility can be used to provide a view of the current state of the network ports on the system,
-and to bind and unbind those ports from the different kernel modules, including the uio and vfio modules.
-The following are some examples of how the script can be used.
-A full description of the script and its parameters can be obtained by calling the script with the ``--help`` or ``--usage`` options.
-Note that the uio or vfio kernel modules to be used, should be loaded into the kernel before
-running the ``dpdk-devbind.py`` script.
-
-.. warning::
-
-    Due to the way VFIO works, there are certain limitations to which devices can be used with VFIO.
-    Mainly it comes down to how IOMMU groups work.
-    Any Virtual Function device can be used with VFIO on its own, but physical devices will require either all ports bound to VFIO,
-    or some of them bound to VFIO while others not being bound to anything at all.
-
-    If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU group in which your device is in.
-    Therefore, the bridge driver should also be unbound from the bridge PCI device for VFIO to work with devices behind the bridge.
-
-.. warning::
-
-    While any user can run the dpdk-devbind.py script to view the status of the network ports,
-    binding or unbinding network ports requires root privileges.
-
-To see the status of all network ports on the system:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --status
-
-    Network devices using DPDK-compatible driver
-    ============================================
-    0000:82:00.0 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
-    0000:82:00.1 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
-
-    Network devices using kernel driver
-    ===================================
-    0000:04:00.0 'I350 1-GbE NIC' if=em0  drv=igb unused=uio_pci_generic *Active*
-    0000:04:00.1 'I350 1-GbE NIC' if=eth1 drv=igb unused=uio_pci_generic
-    0000:04:00.2 'I350 1-GbE NIC' if=eth2 drv=igb unused=uio_pci_generic
-    0000:04:00.3 'I350 1-GbE NIC' if=eth3 drv=igb unused=uio_pci_generic
-
-    Other network devices
-    =====================
-    <none>
-
-To bind device ``eth1``,``04:00.1``, to the ``uio_pci_generic`` driver:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1
-
-or, alternatively,
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1
-
-To restore device ``82:00.0`` to its original kernel binding:
-
-.. code-block:: console
-
-    ./usertools/dpdk-devbind.py --bind=ixgbe 82:00.0
diff --git a/doc/guides/linux_gsg/index.rst b/doc/guides/linux_gsg/index.rst
index 3d3ada15e..799559c22 100644
--- a/doc/guides/linux_gsg/index.rst
+++ b/doc/guides/linux_gsg/index.rst
@@ -40,6 +40,7 @@ Getting Started Guide for Linux
     intro
     sys_reqs
     build_dpdk
+    linux_drivers
     build_sample_apps
     enable_func
     quick_start
diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
new file mode 100644
index 000000000..b3525c0b2
--- /dev/null
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -0,0 +1,197 @@
+..  BSD LICENSE
+    Copyright(c) 2010-2015 Intel Corporation.
+    Copyright(c) 2017 Mellanox Corporation.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+.. _linux_gsg_linux_drivers:
+
+Linux Drivers
+=============
+
+Different PMDs may require different kernel drivers in order to work properly.
+Depending on the PMD being used, a corresponding kernel driver should be loaded
+and bound to the network ports.
+
+UIO
+---
+
+A small kernel module to set up the device, map device memory to user-space and register interrupts.
+In many cases, the standard ``uio_pci_generic`` module included in the Linux kernel
+can provide the uio capability. This module can be loaded using the command:
+
+.. code-block:: console
+
+    sudo modprobe uio_pci_generic
+
+.. note::
+
+    ``uio_pci_generic`` module doesn't support the creation of virtual functions.
+
+As an alternative to the ``uio_pci_generic``, the DPDK also includes the igb_uio
+module which can be found in the kmod subdirectory referred to above. It can
+be loaded as shown below:
+
+.. code-block:: console
+
+    sudo modprobe uio
+    sudo insmod kmod/igb_uio.ko
+
+.. note::
+
+    For some devices which lack support for legacy interrupts, e.g. virtual function
+    (VF) devices, the ``igb_uio`` module may be needed in place of ``uio_pci_generic``.
+
+Since DPDK release 1.7 onward provides VFIO support, use of UIO is optional
+for platforms that support using VFIO.
+
+VFIO
+----
+
+A more robust and secure driver than UIO, relying on IOMMU protection.
+To make use of VFIO, the ``vfio-pci`` module must be loaded:
+
+.. code-block:: console
+
+    sudo modprobe vfio-pci
+
+Note that in order to use VFIO, your kernel must support it.
+VFIO kernel modules have been included in the Linux kernel since version 3.6.0 and are usually present by default,
+however, please consult your distribution's documentation to make sure that is the case.
+
+Also, to use VFIO, both kernel and BIOS must support and be configured to use IO virtualization (such as Intel® VT-d).
+
+.. note::
+
+    ``vfio-pci`` module doesn't support the creation of virtual functions.
+
+For proper operation of VFIO when running DPDK applications as a non-privileged user, correct permissions should also be set up.
+This can be done by using the DPDK setup script (called dpdk-setup.sh and located in the usertools directory).
+
+.. note::
+
+    VFIO can be used without IOMMU. While this is just as unsafe as using UIO, it does make it possible for the user to keep the degree of device access and programming that VFIO has, in situations where IOMMU is not available.
+
+Bifurcated Driver
+-----------------
+
+PMDs which use the bifurcated driver co-exist with the device kernel driver.
+In this model the NIC is controlled by the kernel, while the data
+path is performed by the PMD directly on top of the device.
+
+This model has the following benefits:
+
+ - It is secure and robust, as the memory management and isolation
+   are done by the kernel.
+ - It enables the user to use legacy Linux tools such as ``ethtool`` or
+   ``ifconfig`` while running a DPDK application on the same network ports.
+ - It enables the DPDK application to filter only part of the traffic,
+   while the rest is directed to and handled by the kernel driver.
+
+More about the bifurcated driver can be found in
+`Mellanox Bifurcated DPDK PMD
+<https://dpdksummit.com/Archive/pdf/2016Userspace/Day02-Session04-RonyEfraim-Userspace2016.pdf>`__.
+
+.. _linux_gsg_binding_kernel:
+
+Binding and Unbinding Network Ports to/from the Kernel Modules
+--------------------------------------------------------------
+
+.. note::
+
+    PMDs which use the bifurcated driver should not be unbound from their kernel drivers. This section is for PMDs which use the UIO or VFIO drivers.
+
+As of release 1.4, DPDK applications no longer automatically unbind all supported network ports from the kernel driver in use.
+Instead, if the PMD being used relies on the UIO or VFIO drivers, all ports that are to be used by a DPDK application must be bound to the
+``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module before the application is run.
+For such PMDs, any network ports under Linux* control will be ignored and cannot be used by the application.
+
+To bind ports to the ``uio_pci_generic``, ``igb_uio`` or ``vfio-pci`` module for DPDK use,
+and then subsequently return ports to Linux* control,
+a utility script called dpdk-devbind.py is provided in the usertools subdirectory.
+This utility can be used to provide a view of the current state of the network ports on the system,
+and to bind and unbind those ports from the different kernel modules, including the uio and vfio modules.
+The following are some examples of how the script can be used.
+A full description of the script and its parameters can be obtained by calling the script with the ``--help`` or ``--usage`` options.
+Note that the uio or vfio kernel modules to be used should be loaded into the kernel before
+running the ``dpdk-devbind.py`` script.
+
+.. warning::
+
+    Due to the way VFIO works, there are certain limitations to which devices can be used with VFIO.
+    Mainly it comes down to how IOMMU groups work.
+    Any Virtual Function device can be used with VFIO on its own, but physical devices will require either all ports bound to VFIO,
+    or some of them bound to VFIO while others not being bound to anything at all.
+
+    If your device is behind a PCI-to-PCI bridge, the bridge will then be part of the IOMMU group in which your device is in.
+    Therefore, the bridge driver should also be unbound from the bridge PCI device for VFIO to work with devices behind the bridge.
+
+.. warning::
+
+    While any user can run the dpdk-devbind.py script to view the status of the network ports,
+    binding or unbinding network ports requires root privileges.
+
+To see the status of all network ports on the system:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --status
+
+    Network devices using DPDK-compatible driver
+    ============================================
+    0000:82:00.0 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
+    0000:82:00.1 '82599EB 10-GbE NIC' drv=uio_pci_generic unused=ixgbe
+
+    Network devices using kernel driver
+    ===================================
+    0000:04:00.0 'I350 1-GbE NIC' if=em0  drv=igb unused=uio_pci_generic *Active*
+    0000:04:00.1 'I350 1-GbE NIC' if=eth1 drv=igb unused=uio_pci_generic
+    0000:04:00.2 'I350 1-GbE NIC' if=eth2 drv=igb unused=uio_pci_generic
+    0000:04:00.3 'I350 1-GbE NIC' if=eth3 drv=igb unused=uio_pci_generic
+
+    Other network devices
+    =====================
+    <none>
+
+To bind device ``eth1``, ``04:00.1``, to the ``uio_pci_generic`` driver:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=uio_pci_generic 04:00.1
+
+or, alternatively,
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth1
+
+To restore device ``82:00.0`` to its original kernel binding:
+
+.. code-block:: console
+
+    ./usertools/dpdk-devbind.py --bind=ixgbe 82:00.0
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index 709113dcb..2653c5de6 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -186,22 +186,8 @@ Configurations before running DPDK
    **Note**: To get the best performance, ensure that the core and NICs are in the same socket.
    In the example above ``85:00.0`` is on socket 1 and should be used by cores on socket 1 for the best performance.
 
-4. Bind the test ports to DPDK compatible drivers, such as igb_uio. For example bind two ports to a DPDK compatible driver and check the status:
-
-   .. code-block:: console
-
-
-      # Bind ports 82:00.0 and 85:00.0 to dpdk driver
-      ./dpdk_folder/usertools/dpdk-devbind.py -b igb_uio 82:00.0 85:00.0
-
-      # Check the port driver status
-      ./dpdk_folder/usertools/dpdk-devbind.py --status
-
-   See ``dpdk-devbind.py --help`` for more details.
-
-
-More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk`.
-
+4. Check which kernel drivers need to be loaded and whether the network ports need to be unbound from their kernel drivers.
+For more details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
 
 Example of getting best performance for an Intel NIC
 ----------------------------------------------------
-- 
2.12.0


* [dpdk-dev] [PATCH v2 2/3] doc: cleanup UIO hard requirement
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
                   ` (3 preceding siblings ...)
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 " Shahaf Shuler
@ 2017-07-29 16:17 ` Shahaf Shuler
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
  5 siblings, 0 replies; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-29 16:17 UTC (permalink / raw)
  To: thomas, john.mcnamara; +Cc: dev

UIO is not a must for all PMDs.

Remove this hard requirement from the Linux Getting Started Guide.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/linux_gsg/build_sample_apps.rst | 10 ++++++----
 doc/guides/linux_gsg/sys_reqs.rst          |  2 --
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst
index 12fefffdd..0cc5fd173 100644
--- a/doc/guides/linux_gsg/build_sample_apps.rst
+++ b/doc/guides/linux_gsg/build_sample_apps.rst
@@ -98,12 +98,14 @@ Running a Sample Application
 
 .. warning::
 
-    The UIO drivers and hugepages must be setup prior to running an application.
+    Before running the application make sure:
 
-.. warning::
+    - Hugepages setup is done.
+    - Any kernel driver being used is loaded.
+    - If needed, the ports to be used by the application are
+      bound to the corresponding kernel driver.
 
-    Any ports to be used by the application must be already bound to an appropriate kernel
-    module, as described in :ref:`linux_gsg_binding_kernel`, prior to running the application.
+    Refer to :ref:`linux_gsg_linux_drivers` for more details.
 
 The application is linked with the DPDK target environment's Environmental Abstraction Layer (EAL) library,
 which provides some options that are generic to every DPDK application.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index af7a93121..375bec8c9 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -131,8 +131,6 @@ System Software
 
     For other kernel builds, options which should be enabled for DPDK include:
 
-    *   UIO support
-
     *   HUGETLBFS
 
     *   PROC_PAGE_MONITOR  support
-- 
2.12.0

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dpdk-dev] [PATCH v2 3/3] doc: move i40e specific to i40e guide
  2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
                   ` (4 preceding siblings ...)
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
@ 2017-07-29 16:17 ` Shahaf Shuler
  5 siblings, 0 replies; 10+ messages in thread
From: Shahaf Shuler @ 2017-07-29 16:17 UTC (permalink / raw)
  To: thomas, john.mcnamara; +Cc: dev

The Linux Getting Started Guide contains
parts which are specific to the i40e PMD. This
confuses users who read the guide on their
first try with DPDK.

Move those parts to the i40e NIC manual.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
on v2:
 - Moved intel_perf_test_setup.svg to proper location.
---
 doc/guides/linux_gsg/enable_func.rst               | 25 -------
 doc/guides/linux_gsg/nic_perf_intel_platform.rst   | 56 ---------------
 doc/guides/linux_gsg/sys_reqs.rst                  |  2 +-
 doc/guides/nics/i40e.rst                           | 79 ++++++++++++++++++++++
 .../img/intel_perf_test_setup.svg                  |  0
 5 files changed, 80 insertions(+), 82 deletions(-)
 rename doc/guides/{linux_gsg => nics}/img/intel_perf_test_setup.svg (100%)

diff --git a/doc/guides/linux_gsg/enable_func.rst b/doc/guides/linux_gsg/enable_func.rst
index 15f53b166..e6c806b0d 100644
--- a/doc/guides/linux_gsg/enable_func.rst
+++ b/doc/guides/linux_gsg/enable_func.rst
@@ -176,28 +176,3 @@ Also, if ``INTEL_IOMMU_DEFAULT_ON`` is not set in the kernel, the ``intel_iommu=
 This ensures that the Intel IOMMU is being initialized as expected.
 
 Please note that while using ``iommu=pt`` is compulsory for ``igb_uio driver``, the ``vfio-pci`` driver can actually work with both ``iommu=pt`` and ``iommu=on``.
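+
+For example, on a GRUB-based distribution these parameters are typically added to the kernel
+command line via the bootloader configuration (the file location and the command used to
+regenerate the configuration vary per distribution)::
+
+    # e.g. in /etc/default/grub, then regenerate the GRUB config and reboot
+    GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"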
-
-High Performance of Small Packets on 40G NIC
---------------------------------------------
-
-As there might be firmware fixes for performance enhancement in latest version
-of firmware image, the firmware update might be needed for getting high performance.
-Check with the local Intel's Network Division application engineers for firmware updates.
-Users should consult the release notes specific to a DPDK release to identify
-the validated firmware version for a NIC using the i40e driver.
-
-Use 16 Bytes RX Descriptor Size
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As i40e PMD supports both 16 and 32 bytes RX descriptor sizes, and 16 bytes size can provide helps to high performance of small packets.
-Configuration of ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` in config files can be changed to use 16 bytes size RX descriptors.
-
-High Performance and per Packet Latency Tradeoff
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Due to the hardware design, the interrupt signal inside NIC is needed for per
-packet descriptor write-back. The minimum interval of interrupts could be set
-at compile time by ``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in configuration files.
-Though there is a default configuration, the interval could be tuned by the
-users with that configuration item depends on what the user cares about more,
-performance or per packet latency.
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index 2653c5de6..febd73378 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -188,59 +188,3 @@ Configurations before running DPDK
 
 4. Check which kernel drivers need to be loaded and whether there is a need to unbind the network ports from their kernel drivers.
    For more details about DPDK setup and Linux kernel requirements, see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
-
-Example of getting best performance for an Intel NIC
-----------------------------------------------------
-
-The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
-Intel server platform and Intel XL710 NICs.
-For specific 40G NIC configuration please refer to the i40e NIC guide.
-
-The example scenario is to get best performance with two Intel XL710 40GbE ports.
-See :numref:`figure_intel_perf_test_setup` for the performance test setup.
-
-.. _figure_intel_perf_test_setup:
-
-.. figure:: img/intel_perf_test_setup.*
-
-   Performance Test Setup
-
-
-1. Add two Intel XL710 NICs to the platform, and use one port per card to get best performance.
-   The reason for using two NICs is to overcome a PCIe Gen3's limitation since it cannot provide 80G bandwidth
-   for two 40G ports, but two different PCIe Gen3 x8 slot can.
-   Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
-
-      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
-      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
-
-2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
-
-3. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
-   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
-   are 18-35 and 54-71.
-   Note: Don't use 2 logical cores on the same core (e.g core18 has 2 logical cores, core18 and core54), instead, use 2 logical
-   cores from different cores (e.g core18 and core19).
-
-4. Bind these two ports to igb_uio.
-
-5. As to XL710 40G port, we need at least two queue pairs to achieve best performance, then two queues per port
-   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
-
-6. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
-   Compile the ``l3fwd sample`` with the default lpm mode.
-
-7. The command line of running l3fwd would be something like the followings::
-
-      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
-              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
-
-   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
-   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
-
-
-8. Configure the traffic at a traffic generator.
-
-   * Start creating a stream on packet generator.
-
-   * Set the Ethernet II type to 0x0800.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 375bec8c9..0f98876d3 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -43,7 +43,7 @@ BIOS Setting Prerequisite on x86
 
 For the majority of platforms, no special BIOS settings are needed to use basic DPDK functionality.
 However, for additional HPET timer and power management functionality,
-and high performance of small packets on 40G NIC, BIOS setting changes may be needed.
+and high performance of small packets, BIOS setting changes may be needed.
 Consult the section on :ref:`Enabling Additional Functionality <Enabling_Additional_Functionality>`
 for more information on the required changes.
 
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index a0262a9f7..bc200d39d 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -464,3 +464,82 @@ enabled using the steps below.
 #. Set the PCI configure register with new value::
 
       setpci -s <XX:XX.X> a8.w=<value>
+
+High Performance of Small Packets on 40G NIC
+--------------------------------------------
+
+Since the latest firmware image may contain fixes that improve performance,
+a firmware update may be needed to achieve high performance.
+Check with your local Intel Network Division application engineers for firmware updates.
+Consult the release notes of the specific DPDK release to identify
+the validated firmware version for a NIC using the i40e driver.
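+
+For example, the firmware version currently running on a port can usually be checked with
+``ethtool`` while the port is still bound to its kernel driver (the interface name below is
+only illustrative)::
+
+    ethtool -i enp130s0f0 | grep firmware-version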
+
+Use 16 Bytes RX Descriptor Size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The i40e PMD supports both 16 and 32 byte RX descriptors, and the 16 byte size can help
+improve performance for small packets. Set ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC``
+in the config files to use 16 byte RX descriptors.
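+
+For example, assuming the legacy ``make``-based build, a minimal way to enable it is to edit
+the build configuration and rebuild the target::
+
+    # e.g. in config/common_base (or the target's generated .config)
+    CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y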
+
+High Performance and per Packet Latency Tradeoff
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Due to the hardware design, an interrupt signal inside the NIC is needed for per-packet
+descriptor write-back. The minimum interrupt interval can be set at compile time via
+``CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL`` in the configuration files.
+Although a default value is provided, users can tune the interval with this configuration
+item, depending on whether they care more about throughput or per-packet latency.
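+
+For example, a sketch of tuning it at build time (the value is only a placeholder, see the
+comments in the configuration file for the unit and the allowed range)::
+
+    # e.g. in config/common_base, before rebuilding the target
+    CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=<interval>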
+
+Example of getting best performance with l3fwd
+------------------------------------------------------
+
+The following is an example of running the DPDK ``l3fwd`` sample application to get high performance with an
+Intel server platform and Intel XL710 NICs.
+
+The example scenario is to get best performance with two Intel XL710 40GbE ports.
+See :numref:`figure_intel_perf_test_setup` for the performance test setup.
+
+.. _figure_intel_perf_test_setup:
+
+.. figure:: img/intel_perf_test_setup.*
+
+   Performance Test Setup
+
+
+1. Add two Intel XL710 NICs to the platform, and use one port per card to get the best performance.
+   The reason for using two NICs is that a single PCIe Gen3 x8 slot cannot provide 80G of bandwidth
+   for two 40G ports, while two separate PCIe Gen3 x8 slots can.
+   Referring to the sample NIC output above, we can select ``82:00.0`` and ``85:00.0`` as the test ports::
+
+      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
+      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
+
+2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
+
+3. Check the NUMA node (socket ID) of the PCI devices and get the core numbers for that socket.
+   In this case, ``82:00.0`` and ``85:00.0`` are both on socket 1, and the cores on socket 1 in the referenced platform
+   are 18-35 and 54-71.
+   Note: Do not use two logical cores on the same physical core (e.g. physical core 18 has two logical cores,
+   18 and 54); instead, use two logical cores from different physical cores (e.g. cores 18 and 19).
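+
+   For example, the socket of a port and the CPU layout can be checked as follows (the PCI
+   address is illustrative)::
+
+      cat /sys/bus/pci/devices/0000:82:00.0/numa_node
+      lscpu | grep NUMA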
+
+4. Bind these two ports to ``igb_uio``.
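+
+   For example, using the ``dpdk-devbind.py`` tool shipped with DPDK::
+
+      ./dpdk_folder/usertools/dpdk-devbind.py -b igb_uio 82:00.0 85:00.0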
+
+5. For an XL710 40G port, at least two queue pairs are needed to achieve the best performance,
+   so two queues per port are required, and each queue pair needs a dedicated CPU core
+   for receiving/transmitting packets.
+
+6. The DPDK sample application ``l3fwd`` is used for performance testing, using the two ports
+   for bi-directional forwarding. Compile the ``l3fwd`` sample with the default LPM mode.
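+
+   For example, a typical build with the legacy ``make`` system might look like this
+   (paths and target name are illustrative)::
+
+      cd dpdk_folder/examples/l3fwd
+      make RTE_SDK=/path/to/dpdk RTE_TARGET=x86_64-native-linuxapp-gcc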
+
+7. The command line for running ``l3fwd`` would be something like the following::
+
+      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
+              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
+
+   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
+   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
+
+8. Configure the traffic on the traffic generator.
+
+   * Create a stream on the packet generator.
+
+   * Set the Ethernet II type to 0x0800.
diff --git a/doc/guides/linux_gsg/img/intel_perf_test_setup.svg b/doc/guides/nics/img/intel_perf_test_setup.svg
similarity index 100%
rename from doc/guides/linux_gsg/img/intel_perf_test_setup.svg
rename to doc/guides/nics/img/intel_perf_test_setup.svg
-- 
2.12.0

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/3] doc: move kernel drivers to a new chapter
  2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 " Shahaf Shuler
@ 2017-07-31 22:10   ` Thomas Monjalon
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2017-07-31 22:10 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: dev, john.mcnamara

29/07/2017 18:17, Shahaf Shuler:
> The UIO and VFIO sections should not be part of
> the "Compiling the DPDK Target from Source" chapter,
> as it is PMD specific and not true for all PMDs.
> 
> Instead, moving those sections to a new chapter
> which include all kernel drivers being used along with
> the different PMDs.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>

Series applied, thanks

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-07-31 22:10 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-26 17:17 [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Shahaf Shuler
2017-07-26 17:17 ` [dpdk-dev] [PATCH 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
2017-07-28 10:14   ` Mcnamara, John
2017-07-26 17:17 ` [dpdk-dev] [PATCH 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
2017-07-28 10:16   ` Mcnamara, John
2017-07-28 10:13 ` [dpdk-dev] [PATCH 1/3] doc: move kernel drivers to a new chapter Mcnamara, John
2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 " Shahaf Shuler
2017-07-31 22:10   ` Thomas Monjalon
2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 2/3] doc: cleanup UIO hard requirement Shahaf Shuler
2017-07-29 16:17 ` [dpdk-dev] [PATCH v2 3/3] doc: move i40e specific to i40e guide Shahaf Shuler
