Soft Patch Panel
From: ogawa.yasufumi@lab.ntt.co.jp
To: ferruh.yigit@intel.com, spp@dpdk.org, ogawa.yasufumi@lab.ntt.co.jp
Subject: [spp] [PATCH 10/11] docs: move usecases to doc root
Date: Wed,  9 Jan 2019 10:49:31 +0900
Message-ID: <1546998573-26108-11-git-send-email-ogawa.yasufumi@lab.ntt.co.jp>
In-Reply-To: <1546998573-26108-1-git-send-email-ogawa.yasufumi@lab.ntt.co.jp>

From: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>

"Use Cases" has been a subsection of the setup guide. However, the
types of SPP secondary process have increased, so the use cases
section should be moved to the doc root and given a subsection for
each SPP secondary process. This update adds the top-level use cases
section.

Signed-off-by: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>
---
 docs/guides/index.rst             |   2 +
 docs/guides/setup/index.rst       |   1 -
 docs/guides/setup/use_cases.rst   | 670 -------------------------------------
 docs/guides/use_cases/index.rst   |  11 +
 docs/guides/use_cases/spp_nfv.rst | 671 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 684 insertions(+), 671 deletions(-)
 delete mode 100644 docs/guides/setup/use_cases.rst
 create mode 100644 docs/guides/use_cases/index.rst
 create mode 100644 docs/guides/use_cases/spp_nfv.rst

diff --git a/docs/guides/index.rst b/docs/guides/index.rst
index c2be94c..8a98755 100644
--- a/docs/guides/index.rst
+++ b/docs/guides/index.rst
@@ -1,5 +1,6 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2010-2014 Intel Corporation
+    Copyright(c) 2017-2019 Nippon Telegraph and Telephone Corporation
 
 SPP documentation
 =================
@@ -10,6 +11,7 @@ SPP documentation
    overview
    design/index
    setup/index
+   use_cases/index
    commands/index
    tools/index
    spp_vf/index
diff --git a/docs/guides/setup/index.rst b/docs/guides/setup/index.rst
index c184f75..bc8d8a6 100644
--- a/docs/guides/setup/index.rst
+++ b/docs/guides/setup/index.rst
@@ -10,5 +10,4 @@ Setup Guide
 
    getting_started
    howto_use
-   use_cases
    performance_opt
diff --git a/docs/guides/setup/use_cases.rst b/docs/guides/setup/use_cases.rst
deleted file mode 100644
index d768ea8..0000000
--- a/docs/guides/setup/use_cases.rst
+++ /dev/null
@@ -1,670 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2010-2014 Intel Corporation
-
-Use Cases
-=========
-
-.. _single_spp_nfv:
-
-Single spp_nfv
---------------
-
-The simplest use case, intended mainly for testing packet forwarding
-performance on the host, uses one ``spp_nfv`` and two physical ports.
-
-In this use case, you configure two scenarios.
-
-- Configure ``spp_nfv`` as L2fwd
-- Configure ``spp_nfv`` for Loopback
-
-
-First of all, check the status of ``spp_nfv`` from the SPP CLI.
-
-.. code-block:: console
-
-    spp > nfv 1; status
-    - status: idling
-    - ports:
-      - phy:0
-      - phy:1
-
-This status message explains that ``nfv 1`` has two physical ports.
-
-
-Configure spp_nfv as L2fwd
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Assign the destinations of the ports with the ``patch`` subcommand and
-start forwarding.
-Patch from ``phy:0`` to ``phy:1`` and from ``phy:1`` to ``phy:0``,
-which makes the connection bi-directional.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 phy:1
-    Patch ports (phy:0 -> phy:1).
-    spp > nfv 1; patch phy:1 phy:0
-    Patch ports (phy:1 -> phy:0).
-    spp > nfv 1; forward
-    Start forwarding.
-
-Confirm that the status of ``nfv 1`` is updated to ``running`` and the
-ports are patched as you defined.
-
-.. code-block:: console
-
-    spp > nfv 1; status
-    - status: running
-    - ports:
-      - phy:0 -> phy:1
-      - phy:1 -> phy:0
-
-.. _figure_spp_nfv_as_l2fwd:
-
-.. figure:: ../images/setup/use_cases/spp_nfv_l2fwd.*
-   :width: 52%
-
-   spp_nfv as l2fwd
-
-
-Stop forwarding and reset the patches to clear the configuration.
-``patch reset`` clears all patch configurations.
-
-.. code-block:: console
-
-    spp > nfv 1; stop
-    Stop forwarding.
-    spp > nfv 1; patch reset
-    Clear all of patches.
-
-
-Configure spp_nfv for Loopback
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Patch ``phy:0`` to ``phy:0`` and ``phy:1`` to ``phy:1``
-for loopback.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 phy:0
-    Patch ports (phy:0 -> phy:0).
-    spp > nfv 1; patch phy:1 phy:1
-    Patch ports (phy:1 -> phy:1).
-    spp > nfv 1; forward
-    Start forwarding.
-
-
-Dual spp_nfv
-------------
-
-A use case for testing packet forwarding performance with two
-``spp_nfv`` processes on the host.
-Throughput is expected to be better than in the
-:ref:`Single spp_nfv<single_spp_nfv>`
-use case because the bi-directional forwarding handled by a single
-``spp_nfv`` is split into two uni-directional flows, one per
-``spp_nfv``.
-
-In this use case, you configure two scenarios similar to those in the
-previous section.
-
-- Configure Two ``spp_nfv`` as L2fwd
-- Configure Two ``spp_nfv`` for Loopback
-
-
-Configure Two spp_nfv as L2fwd
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Assign the destinations of the ports with the ``patch`` subcommand and
-start forwarding.
-Patch from ``phy:0`` to ``phy:1`` for ``nfv 1`` and
-from ``phy:1`` to ``phy:0`` for ``nfv 2``.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 phy:1
-    Patch ports (phy:0 -> phy:1).
-    spp > nfv 2; patch phy:1 phy:0
-    Patch ports (phy:1 -> phy:0).
-    spp > nfv 1; forward
-    Start forwarding.
-    spp > nfv 2; forward
-    Start forwarding.
-
-.. _figure_spp_two_nfv_as_l2fwd:
-
-.. figure:: ../images/setup/use_cases/spp_two_nfv_l2fwd.*
-   :width: 52%
-
-   Two spp_nfv as l2fwd
-
-
-Configure two spp_nfv for Loopback
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-For loopback, patch ``phy:0`` to ``phy:0`` for ``nfv 1`` and
-``phy:1`` to ``phy:1`` for ``nfv 2``.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 phy:0
-    Patch ports (phy:0 -> phy:0).
-    spp > nfv 2; patch phy:1 phy:1
-    Patch ports (phy:1 -> phy:1).
-    spp > nfv 1; forward
-    Start forwarding.
-    spp > nfv 2; forward
-    Start forwarding.
-
-.. _figure_spp_two_nfv_loopback:
-
-.. figure:: ../images/setup/use_cases/spp_two_nfv_loopback.*
-   :width: 52%
-
-   Two spp_nfv for loopback
-
-
-Dual spp_nfv with Ring PMD
---------------------------
-
-In this use case, you configure two scenarios using ring PMD.
-
-- Uni-Directional L2fwd
-- Bi-Directional L2fwd
-
-Ring PMD
-~~~~~~~~
-
-Ring PMD is an interface for communication between secondary processes
-on the host.
-The maximum number of ring PMDs is defined by the ``-n`` option of
-``spp_primary``, and ring IDs start from 0.
-
-A ring PMD is added with the ``add`` subcommand.
-All ring PMDs are shown with the ``status`` subcommand.
-
-.. code-block:: console
-
-    spp > nfv 1; add ring:0
-    Add ring:0.
-    spp > nfv 1; status
-    - status: idling
-    - ports:
-      - phy:0
-      - phy:1
-      - ring:0
-
-Notice that ``ring:0`` is added to ``nfv 1``.
-You can delete it with the ``del`` command if you no longer need it.
-
-.. code-block:: console
-
-    spp > nfv 1; del ring:0
-    Delete ring:0.
-    spp > nfv 1; status
-    - status: idling
-    - ports:
-      - phy:0
-      - phy:1
-
-
-Uni-Directional L2fwd
-~~~~~~~~~~~~~~~~~~~~~
-
-Add a ring PMD and connect two ``spp_nfv`` processes.
-To configure the network path, add ``ring:0`` to ``nfv 1`` and
-``nfv 2``. Then, connect them with the ``patch`` subcommand.
-
-.. code-block:: console
-
-    spp > nfv 1; add ring:0
-    Add ring:0.
-    spp > nfv 2; add ring:0
-    Add ring:0.
-    spp > nfv 1; patch phy:0 ring:0
-    Patch ports (phy:0 -> ring:0).
-    spp > nfv 2; patch ring:0 phy:1
-    Patch ports (ring:0 -> phy:1).
-    spp > nfv 1; forward
-    Start forwarding.
-    spp > nfv 2; forward
-    Start forwarding.
-
-.. _figure_spp_uni_directional_l2fwd:
-
-.. figure:: ../images/setup/use_cases/spp_unidir_l2fwd.*
-   :width: 52%
-
-   Uni-Directional l2fwd
-
-
-Bi-Directional L2fwd
-~~~~~~~~~~~~~~~~~~~~
-
-Add two ring PMDs to two ``spp_nfv`` processes.
-For bi-directional forwarding,
-patch ``ring:0`` for a path from ``nfv 1`` to ``nfv 2``
-and ``ring:1`` for another path from ``nfv 2`` to ``nfv 1``.
-
-First, add ``ring:0`` and ``ring:1`` to ``nfv 1``.
-
-.. code-block:: console
-
-    spp > nfv 1; add ring:0
-    Add ring:0.
-    spp > nfv 1; add ring:1
-    Add ring:1.
-    spp > nfv 1; status
-    - status: idling
-    - ports:
-      - phy:0
-      - phy:1
-      - ring:0
-      - ring:1
-
-Then, add ``ring:0`` and ``ring:1`` to ``nfv 2``.
-
-.. code-block:: console
-
-    spp > nfv 2; add ring:0
-    Add ring:0.
-    spp > nfv 2; add ring:1
-    Add ring:1.
-    spp > nfv 2; status
-    - status: idling
-    - ports:
-      - phy:0
-      - phy:1
-      - ring:0
-      - ring:1
-
-Finally, patch the rings to the physical ports on both processes and
-start forwarding.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 ring:0
-    Patch ports (phy:0 -> ring:0).
-    spp > nfv 1; patch ring:1 phy:0
-    Patch ports (ring:1 -> phy:0).
-    spp > nfv 2; patch phy:1 ring:1
-    Patch ports (phy:1 -> ring:1).
-    spp > nfv 2; patch ring:0 phy:1
-    Patch ports (ring:0 -> phy:1).
-    spp > nfv 1; forward
-    Start forwarding.
-    spp > nfv 2; forward
-    Start forwarding.
-
-.. _figure_spp_bi_directional_l2fwd:
-
-.. figure:: ../images/setup/use_cases/spp_bidir_l2fwd.*
-   :width: 52%
-
-   Bi-Directional l2fwd
-
-
-Single spp_nfv with Vhost PMD
------------------------------
-
-Vhost PMD
-~~~~~~~~~
-
-Vhost PMD is an interface for communication between the host and a
-guest VM.
-As described in
-:doc:`How to Use<howto_use>`,
-a vhost interface must be created with the ``add`` subcommand before
-the VM is launched.
-
-
-Setup Vhost PMD
-~~~~~~~~~~~~~~~
-
-In this use case, add ``vhost:0`` to ``nfv 1`` for communicating
-with the VM.
-First, check whether ``/tmp/sock0`` already exists.
-You should remove it if it exists, to avoid a failure in socket file
-creation.
-
-.. code-block:: console
-
-    $ ls /tmp | grep sock
-    sock0 ...
-
-    # remove it if it exists
-    $ sudo rm /tmp/sock0
-
-Create ``/tmp/sock0`` from ``nfv 1``.
-
-.. code-block:: console
-
-    spp > nfv 1; add vhost:0
-    Add vhost:0.
-
-
-.. _usecase_unidir_l2fwd_vhost:
-
-Uni-Directional L2fwd with Vhost PMD
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Launch a VM using the vhost interface created in the previous step.
-Launching a VM is described in
-:doc:`How to Use<howto_use>`.
-Launch ``spp_vm`` with secondary ID 2.
-You can find ``nfv 2`` from the controller after it is launched.
-
-Patch ``phy:0`` and ``phy:1`` to ``vhost:0`` with ``nfv 1``
-running on the host.
-Inside the VM, configure a loopback by patching ``phy:0`` to ``phy:0``
-with ``nfv 2``.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:0 vhost:0
-    Patch ports (phy:0 -> vhost:0).
-    spp > nfv 1; patch vhost:0 phy:1
-    Patch ports (vhost:0 -> phy:1).
-    spp > nfv 2; patch phy:0 phy:0
-    Patch ports (phy:0 -> phy:0).
-    spp > nfv 1; forward
-    Start forwarding.
-    spp > nfv 2; forward
-    Start forwarding.
-
-.. _figure_spp_uni_directional_l2fwd_vhost:
-
-.. figure:: ../images/setup/use_cases/spp_unidir_l2fwd_vhost.*
-   :width: 52%
-
-   Uni-Directional l2fwd with vhost
-
-Single spp_nfv with PCAP PMD
------------------------------
-
-PCAP PMD
-~~~~~~~~
-
-Pcap PMD is an interface for capturing or restoring traffic.
-For using pcap PMD, you should set ``CONFIG_RTE_LIBRTE_PMD_PCAP``
-and ``CONFIG_RTE_PORT_PCAP`` to ``y`` and compile DPDK before SPP.
-Refer to
-:ref:`Install DPDK and SPP<install_dpdk_spp>`
-for details of the setup.
-
-Pcap PMD has two different streams for rx and tx.
-The tx device is for capturing packets and the rx device is for
-restoring captured packets.
-For the rx device, you can use any pcap file, not only those created
-by SPP's pcap PMD.
-
-To start using pcap PMD, just use the ``add`` subcommand as with ring.
-Here is an example of creating pcap PMD ``pcap:1``.
-
-.. code-block:: console
-
-    spp > nfv 1; add pcap:1
-    Add pcap:1.
-
-After running it, you can find two pcap files in ``/tmp``.
-
-.. code-block:: console
-
-    $ ls /tmp | grep pcap$
-    spp-rx1.pcap
-    spp-tx1.pcap
-
-If you already have a dumped file, you can use it by putting it at
-``/tmp/spp-rx1.pcap`` before running the ``add`` subcommand.
-SPP does not overwrite the rx pcap file if it already exists,
-but it does overwrite the tx pcap file.
-
-Capture Incoming Packets
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-As the first use case, add a pcap PMD and capture incoming packets
-from ``phy:0``.
-
-.. code-block:: console
-
-    spp > nfv 1; add pcap:1
-    Add pcap:1.
-    spp > nfv 1; patch phy:0 pcap:1
-    Patch ports (phy:0 -> pcap:1).
-    spp > nfv 1; forward
-    Start forwarding.
-
-.. _figure_spp_pcap_incoming:
-
-.. figure:: ../images/setup/use_cases/spp_pcap_incoming.*
-   :width: 50%
-
-   Capture incoming packets
-
-In this example, we use pktgen.
-Once you start forwarding packets from pktgen, you can see that the
-size of ``/tmp/spp-tx1.pcap`` increases rapidly (or gradually,
-depending on the rate).
-
-.. code-block:: console
-
-    Pktgen:/> set 0 size 1024
-    Pktgen:/> start 0
-
-To stop capturing, simply stop forwarding of ``spp_nfv``.
-
-.. code-block:: console
-
-    spp > nfv 1; stop
-    Stop forwarding.
-
-You can analyze the dumped pcap file with other tools such as
-Wireshark.
-
-Restore dumped Packets
-~~~~~~~~~~~~~~~~~~~~~~
-
-In this use case, use the file dumped in the previous section.
-Copy ``spp-tx1.pcap`` to ``spp-rx2.pcap`` first.
-
-.. code-block:: console
-
-    $ sudo cp /tmp/spp-tx1.pcap /tmp/spp-rx2.pcap
-
-Then, add pcap PMD ``pcap:2`` to another ``spp_nfv``.
-
-.. code-block:: console
-
-    spp > nfv 2; add pcap:2
-    Add pcap:2.
-
-.. _figure_spp_pcap_restoring:
-
-.. figure:: ../images/setup/use_cases/spp_pcap_restoring.*
-   :width: 52%
-
-   Restore dumped packets
-
-You can find that ``spp-tx2.pcap`` is created and ``spp-rx2.pcap``
-still remains.
-
-.. code-block:: console
-
-    $ ls -al /tmp/spp*.pcap
-    -rw-r--r-- 1 root root         24  ...  /tmp/spp-rx1.pcap
-    -rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-rx2.pcap
-    -rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-tx1.pcap
-    -rw-r--r-- 1 root root          0  ...  /tmp/spp-tx2.pcap
-
-To confirm that the packets are restored, patch ``pcap:2`` to
-``phy:1`` and watch the received packets on pktgen.
-
-.. code-block:: console
-
-    spp > nfv 2; patch pcap:2 phy:1
-    Patch ports (pcap:2 -> phy:1).
-    spp > nfv 2; forward
-    Start forwarding.
-
-After forwarding starts, you can see that the packet count increases.
-
-
-Multiple Nodes
---------------
-
-SPP provides multi-node support for configuring a network across
-several nodes from the SPP CLI. You can configure each of the nodes
-step by step.
-
-In :numref:`figure_spp_multi_nodes_vhost`, there are four nodes on
-which SPP and service VMs are running. Host1 behaves as a patch panel
-connecting the other nodes. A request is sent from a VM on host2 to a
-VM on host3 or host4. Host4 is a backup server for host3 and replaces
-host3 when the network configuration is changed. Blue lines are the
-paths for host3, red lines are for host4, and they are switched
-alternately.
-
-.. _figure_spp_multi_nodes_vhost:
-
-.. figure:: ../images/setup/use_cases/spp_multi_nodes_vhost.*
-   :width: 100%
-
-   Multiple nodes example
-
-Launch SPP on Multiple Nodes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Before launching the SPP CLI, launch spp-ctl on each of the nodes. You
-should give an IP address with the ``-b`` option so that it can be
-accessed from outside the node.
-Here is an example of launching spp-ctl on host1.
-
-.. code-block:: console
-
-    # Launch on host1
-    $ python3 src/spp-ctl/spp-ctl -b 192.168.11.101
-
-You also need to launch it on host2, host3 and host4, each in its own
-terminal.
-
-After all of the spp-ctl processes are launched, launch the SPP CLI
-with four ``-b`` options, one for each host. The SPP CLI can be
-launched on any of the nodes.
-
-.. code-block:: console
-
-    # Launch SPP CLI
-    $ python src/spp.py -b 192.168.11.101 \
-        -b 192.168.11.102 \
-        -b 192.168.11.103 \
-        -b 192.168.11.104
-
-If you succeeded in launching all of the processes above, you can find
-them by running the ``server list`` command.
-
-.. code-block:: console
-
-    # list registered spp-ctl servers
-    spp > server list
-      1: 192.168.1.101:7777 *
-      2: 192.168.1.102:7777
-      3: 192.168.1.103:7777
-      4: 192.168.1.104:7777
-
-You might notice that the first entry is marked with ``*``. It means
-that the first node is the one currently under management.
-
-Switch Nodes
-~~~~~~~~~~~~
-
-The SPP CLI manages the node marked with ``*``. To configure other
-nodes, change the managed node with the ``server`` command.
-Here is an example of switching to the third node.
-
-.. code-block:: console
-
-    # switch to the third node
-    spp > server 3
-    Switch spp-ctl to "3: 192.168.1.103:7777".
-
-And this is the result after switching to host3.
-
-.. code-block:: console
-
-    spp > server list
-      1: 192.168.1.101:7777
-      2: 192.168.1.102:7777
-      3: 192.168.1.103:7777 *
-      4: 192.168.1.104:7777
-
-You can also confirm this change by checking the IP address of spp-ctl
-with the ``status`` command.
-
-.. code-block:: console
-
-    spp > status
-    - spp-ctl:
-      - address: 192.168.1.103:7777
-    - primary:
-      - status: not running
-    ...
-
-Configure Patch Panel Node
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As the first step of the network configuration, set up the blue lines
-on host1 as described in :numref:`figure_spp_multi_nodes_vhost`.
-You should confirm that the managed server is host1.
-
-.. code-block:: console
-
-    spp > server list
-      1: 192.168.1.101:7777 *
-      2: 192.168.1.102:7777
-      ...
-
-Patch two sets of physical ports and start forwarding.
-
-.. code-block:: console
-
-    spp > nfv 1; patch phy:1 phy:2
-    Patch ports (phy:1 -> phy:2).
-    spp > nfv 1; patch phy:3 phy:0
-    Patch ports (phy:3 -> phy:0).
-    spp > nfv 1; forward
-    Start forwarding.
-
-Configure Service VM Nodes
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The setup of host2, host3, and host4 is almost the same as
-:ref:`Uni-Directional L2fwd with Vhost PMD<usecase_unidir_l2fwd_vhost>`.
-
-For host2, switch the server to host2 and run the nfv commands.
-
-.. code-block:: console
-
-    # switch to server 2
-    spp > server 2
-    Switch spp-ctl to "2: 192.168.1.102:7777".
-
-    # configure
-    spp > nfv 1; patch phy:0 vhost:0
-    Patch ports (phy:0 -> vhost:0).
-    spp > nfv 1; patch vhost:0 phy:1
-    Patch ports (vhost:0 -> phy:1).
-    spp > nfv 1; forward
-    Start forwarding.
-
-Then, switch to host3 and host4 and do the same configuration.
-
-Change Path to Backup Node
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Finally, change the path from the blue lines to the red lines.
-
-.. code-block:: console
-
-    # switch to server 1
-    spp > server 1
-    Switch spp-ctl to "1: 192.168.1.101:7777".
-
-    # remove blue path
-    spp > nfv 1; stop
-    Stop forwarding.
-    spp > nfv 1; patch reset
-    Clear all of patches.
-
-    # configure red path
-    spp > nfv 2; patch phy:1 phy:4
-    Patch ports (phy:1 -> phy:4).
-    spp > nfv 2; patch phy:5 phy:0
-    Patch ports (phy:5 -> phy:0).
-    spp > nfv 2; forward
-    Start forwarding.
diff --git a/docs/guides/use_cases/index.rst b/docs/guides/use_cases/index.rst
new file mode 100644
index 0000000..c41dc2e
--- /dev/null
+++ b/docs/guides/use_cases/index.rst
@@ -0,0 +1,11 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2019 Nippon Telegraph and Telephone Corporation
+
+Use Cases
+=========
+
+.. toctree::
+   :maxdepth: 1
+   :numbered:
+
+   spp_nfv
diff --git a/docs/guides/use_cases/spp_nfv.rst b/docs/guides/use_cases/spp_nfv.rst
new file mode 100644
index 0000000..d9d922c
--- /dev/null
+++ b/docs/guides/use_cases/spp_nfv.rst
@@ -0,0 +1,671 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation
+    Copyright(c) 2017-2019 Nippon Telegraph and Telephone Corporation
+
+spp_nfv
+=======
+
+.. _single_spp_nfv:
+
+Single spp_nfv
+--------------
+
+The simplest use case, intended mainly for testing packet forwarding
+performance on the host, uses one ``spp_nfv`` and two physical ports.
+
+In this use case, you configure two scenarios.
+
+- Configure ``spp_nfv`` as L2fwd
+- Configure ``spp_nfv`` for Loopback
+
+
+First of all, check the status of ``spp_nfv`` from the SPP CLI.
+
+.. code-block:: console
+
+    spp > nfv 1; status
+    - status: idling
+    - ports:
+      - phy:0
+      - phy:1
+
+This status message explains that ``nfv 1`` has two physical ports.
+
+
+Configure spp_nfv as L2fwd
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Assign the destinations of the ports with the ``patch`` subcommand and
+start forwarding.
+Patch from ``phy:0`` to ``phy:1`` and from ``phy:1`` to ``phy:0``,
+which makes the connection bi-directional.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 phy:1
+    Patch ports (phy:0 -> phy:1).
+    spp > nfv 1; patch phy:1 phy:0
+    Patch ports (phy:1 -> phy:0).
+    spp > nfv 1; forward
+    Start forwarding.
+
+Confirm that the status of ``nfv 1`` is updated to ``running`` and the
+ports are patched as you defined.
+
+.. code-block:: console
+
+    spp > nfv 1; status
+    - status: running
+    - ports:
+      - phy:0 -> phy:1
+      - phy:1 -> phy:0
+
+.. _figure_spp_nfv_as_l2fwd:
+
+.. figure:: ../images/setup/use_cases/spp_nfv_l2fwd.*
+   :width: 52%
+
+   spp_nfv as l2fwd
+
+
+Stop forwarding and reset the patches to clear the configuration.
+``patch reset`` clears all patch configurations.
+
+.. code-block:: console
+
+    spp > nfv 1; stop
+    Stop forwarding.
+    spp > nfv 1; patch reset
+    Clear all of patches.
+
+
+Configure spp_nfv for Loopback
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Patch ``phy:0`` to ``phy:0`` and ``phy:1`` to ``phy:1``
+for loopback.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 phy:0
+    Patch ports (phy:0 -> phy:0).
+    spp > nfv 1; patch phy:1 phy:1
+    Patch ports (phy:1 -> phy:1).
+    spp > nfv 1; forward
+    Start forwarding.
+
+
+Dual spp_nfv
+------------
+
+A use case for testing packet forwarding performance with two
+``spp_nfv`` processes on the host.
+Throughput is expected to be better than in the
+:ref:`Single spp_nfv<single_spp_nfv>`
+use case because the bi-directional forwarding handled by a single
+``spp_nfv`` is split into two uni-directional flows, one per
+``spp_nfv``.
+
+In this use case, you configure two scenarios similar to those in the
+previous section.
+
+- Configure Two ``spp_nfv`` as L2fwd
+- Configure Two ``spp_nfv`` for Loopback
+
+
+Configure Two spp_nfv as L2fwd
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Assign the destinations of the ports with the ``patch`` subcommand and
+start forwarding.
+Patch from ``phy:0`` to ``phy:1`` for ``nfv 1`` and
+from ``phy:1`` to ``phy:0`` for ``nfv 2``.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 phy:1
+    Patch ports (phy:0 -> phy:1).
+    spp > nfv 2; patch phy:1 phy:0
+    Patch ports (phy:1 -> phy:0).
+    spp > nfv 1; forward
+    Start forwarding.
+    spp > nfv 2; forward
+    Start forwarding.
+
+.. _figure_spp_two_nfv_as_l2fwd:
+
+.. figure:: ../images/setup/use_cases/spp_two_nfv_l2fwd.*
+   :width: 52%
+
+   Two spp_nfv as l2fwd
+
+
+Configure two spp_nfv for Loopback
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For loopback, patch ``phy:0`` to ``phy:0`` for ``nfv 1`` and
+``phy:1`` to ``phy:1`` for ``nfv 2``.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 phy:0
+    Patch ports (phy:0 -> phy:0).
+    spp > nfv 2; patch phy:1 phy:1
+    Patch ports (phy:1 -> phy:1).
+    spp > nfv 1; forward
+    Start forwarding.
+    spp > nfv 2; forward
+    Start forwarding.
+
+.. _figure_spp_two_nfv_loopback:
+
+.. figure:: ../images/setup/use_cases/spp_two_nfv_loopback.*
+   :width: 52%
+
+   Two spp_nfv for loopback
+
+
+Dual spp_nfv with Ring PMD
+--------------------------
+
+In this use case, you configure two scenarios using ring PMD.
+
+- Uni-Directional L2fwd
+- Bi-Directional L2fwd
+
+Ring PMD
+~~~~~~~~
+
+Ring PMD is an interface for communication between secondary processes
+on the host.
+The maximum number of ring PMDs is defined by the ``-n`` option of
+``spp_primary``, and ring IDs start from 0.
+
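+For example, ``spp_primary`` might be launched as in the following
+sketch, where the secondary option ``-n 10`` after the ``--`` separator
+reserves ten ring PMDs. The binary path, EAL options, port mask and
+spp-ctl address here are illustrative and depend on your environment.
+
+.. code-block:: console
+
+    $ sudo ./primary/x86_64-native-linuxapp-gcc/spp_primary \
+        -l 1 -n 4 --socket-mem 512 \
+        --huge-dir /dev/hugepages --proc-type primary \
+        -- -p 0x03 -n 10 -s 192.168.1.100:5555
+
+In this case, ``ring:0`` to ``ring:9`` would be available to the
+secondary processes.
+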
+A ring PMD is added with the ``add`` subcommand.
+All ring PMDs are shown with the ``status`` subcommand.
+
+.. code-block:: console
+
+    spp > nfv 1; add ring:0
+    Add ring:0.
+    spp > nfv 1; status
+    - status: idling
+    - ports:
+      - phy:0
+      - phy:1
+      - ring:0
+
+Notice that ``ring:0`` is added to ``nfv 1``.
+You can delete it with the ``del`` command if you no longer need it.
+
+.. code-block:: console
+
+    spp > nfv 1; del ring:0
+    Delete ring:0.
+    spp > nfv 1; status
+    - status: idling
+    - ports:
+      - phy:0
+      - phy:1
+
+
+Uni-Directional L2fwd
+~~~~~~~~~~~~~~~~~~~~~
+
+Add a ring PMD and connect two ``spp_nfv`` processes.
+To configure the network path, add ``ring:0`` to ``nfv 1`` and
+``nfv 2``. Then, connect them with the ``patch`` subcommand.
+
+.. code-block:: console
+
+    spp > nfv 1; add ring:0
+    Add ring:0.
+    spp > nfv 2; add ring:0
+    Add ring:0.
+    spp > nfv 1; patch phy:0 ring:0
+    Patch ports (phy:0 -> ring:0).
+    spp > nfv 2; patch ring:0 phy:1
+    Patch ports (ring:0 -> phy:1).
+    spp > nfv 1; forward
+    Start forwarding.
+    spp > nfv 2; forward
+    Start forwarding.
+
+.. _figure_spp_uni_directional_l2fwd:
+
+.. figure:: ../images/setup/use_cases/spp_unidir_l2fwd.*
+   :width: 52%
+
+   Uni-Directional l2fwd
+
+
+Bi-Directional L2fwd
+~~~~~~~~~~~~~~~~~~~~
+
+Add two ring PMDs to two ``spp_nfv`` processes.
+For bi-directional forwarding,
+patch ``ring:0`` for a path from ``nfv 1`` to ``nfv 2``
+and ``ring:1`` for another path from ``nfv 2`` to ``nfv 1``.
+
+First, add ``ring:0`` and ``ring:1`` to ``nfv 1``.
+
+.. code-block:: console
+
+    spp > nfv 1; add ring:0
+    Add ring:0.
+    spp > nfv 1; add ring:1
+    Add ring:1.
+    spp > nfv 1; status
+    - status: idling
+    - ports:
+      - phy:0
+      - phy:1
+      - ring:0
+      - ring:1
+
+Then, add ``ring:0`` and ``ring:1`` to ``nfv 2``.
+
+.. code-block:: console
+
+    spp > nfv 2; add ring:0
+    Add ring:0.
+    spp > nfv 2; add ring:1
+    Add ring:1.
+    spp > nfv 2; status
+    - status: idling
+    - ports:
+      - phy:0
+      - phy:1
+      - ring:0
+      - ring:1
+
+Finally, patch the rings to the physical ports on both processes and
+start forwarding.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 ring:0
+    Patch ports (phy:0 -> ring:0).
+    spp > nfv 1; patch ring:1 phy:0
+    Patch ports (ring:1 -> phy:0).
+    spp > nfv 2; patch phy:1 ring:1
+    Patch ports (phy:1 -> ring:1).
+    spp > nfv 2; patch ring:0 phy:1
+    Patch ports (ring:0 -> phy:1).
+    spp > nfv 1; forward
+    Start forwarding.
+    spp > nfv 2; forward
+    Start forwarding.
+
+.. _figure_spp_bi_directional_l2fwd:
+
+.. figure:: ../images/setup/use_cases/spp_bidir_l2fwd.*
+   :width: 52%
+
+   Bi-Directional l2fwd
+
+
+Single spp_nfv with Vhost PMD
+-----------------------------
+
+Vhost PMD
+~~~~~~~~~
+
+Vhost PMD is an interface for communication between the host and a
+guest VM.
+As described in
+:ref:`How to Use<spp_setup_howto_use>`,
+a vhost interface must be created with the ``add`` subcommand before
+the VM is launched.
+
+
+Setup Vhost PMD
+~~~~~~~~~~~~~~~
+
+In this use case, add ``vhost:0`` to ``nfv 1`` for communicating
+with the VM.
+First, check whether ``/tmp/sock0`` already exists.
+You should remove it if it exists, to avoid a failure in socket file
+creation.
+
+.. code-block:: console
+
+    $ ls /tmp | grep sock
+    sock0 ...
+
+    # remove it if it exists
+    $ sudo rm /tmp/sock0
+
+Create ``/tmp/sock0`` from ``nfv 1``.
+
+.. code-block:: console
+
+    spp > nfv 1; add vhost:0
+    Add vhost:0.
+
+
+.. _usecase_unidir_l2fwd_vhost:
+
+Uni-Directional L2fwd with Vhost PMD
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Launch a VM using the vhost interface created in the previous step.
+Launching a VM is described in
+:ref:`How to Use<spp_setup_howto_use>`.
+Launch ``spp_vm`` with secondary ID 2.
+You can find ``nfv 2`` from the controller after it is launched.
+
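+As a reference, a QEMU invocation using this vhost-user interface might
+look like the following minimal sketch. The socket path matches
+``/tmp/sock0`` created above; the memory object, image path and other
+options are placeholders for your environment.
+
+.. code-block:: console
+
+    $ sudo qemu-system-x86_64 -cpu host -enable-kvm -m 1024 \
+        -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
+        -numa node,memdev=mem -mem-prealloc \
+        -chardev socket,id=chr0,path=/tmp/sock0 \
+        -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
+        -device virtio-net-pci,netdev=net0 \
+        /path/to/vm-image.qcow2
+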
+Patch ``phy:0`` and ``phy:1`` to ``vhost:0`` with ``nfv 1``
+running on the host.
+Inside the VM, configure a loopback by patching ``phy:0`` to ``phy:0``
+with ``nfv 2``.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:0 vhost:0
+    Patch ports (phy:0 -> vhost:0).
+    spp > nfv 1; patch vhost:0 phy:1
+    Patch ports (vhost:0 -> phy:1).
+    spp > nfv 2; patch phy:0 phy:0
+    Patch ports (phy:0 -> phy:0).
+    spp > nfv 1; forward
+    Start forwarding.
+    spp > nfv 2; forward
+    Start forwarding.
+
+.. _figure_spp_uni_directional_l2fwd_vhost:
+
+.. figure:: ../images/setup/use_cases/spp_unidir_l2fwd_vhost.*
+   :width: 52%
+
+   Uni-Directional l2fwd with vhost
+
+Single spp_nfv with PCAP PMD
+-----------------------------
+
+PCAP PMD
+~~~~~~~~
+
+Pcap PMD is an interface for capturing or restoring traffic.
+For using pcap PMD, you should set ``CONFIG_RTE_LIBRTE_PMD_PCAP``
+and ``CONFIG_RTE_PORT_PCAP`` to ``y`` and compile DPDK before SPP.
+Refer to
+:ref:`Install DPDK and SPP<install_dpdk_spp>`
+for details of the setup.
+
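+These flags can be enabled in ``config/common_base`` before compiling
+DPDK. The following is a sketch; adjust it to how you manage your DPDK
+build.
+
+.. code-block:: console
+
+    $ cd /path/to/dpdk
+    $ sed -i 's/CONFIG_RTE_LIBRTE_PMD_PCAP=n/CONFIG_RTE_LIBRTE_PMD_PCAP=y/' config/common_base
+    $ sed -i 's/CONFIG_RTE_PORT_PCAP=n/CONFIG_RTE_PORT_PCAP=y/' config/common_base
+    $ make install T=x86_64-native-linuxapp-gcc
+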
+Pcap PMD has two different streams for rx and tx.
+The tx device is for capturing packets and the rx device is for
+restoring captured packets.
+For the rx device, you can use any pcap file, not only those created
+by SPP's pcap PMD.
+
+To start using pcap PMD, just use the ``add`` subcommand as with ring.
+Here is an example of creating pcap PMD ``pcap:1``.
+
+.. code-block:: console
+
+    spp > nfv 1; add pcap:1
+    Add pcap:1.
+
+After running it, you can find two pcap files in ``/tmp``.
+
+.. code-block:: console
+
+    $ ls /tmp | grep pcap$
+    spp-rx1.pcap
+    spp-tx1.pcap
+
+If you already have a dumped file, you can use it by putting it at
+``/tmp/spp-rx1.pcap`` before running the ``add`` subcommand.
+SPP does not overwrite the rx pcap file if it already exists,
+but it does overwrite the tx pcap file.
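+
+For example, to replay an existing dump through ``pcap:1``, place it
+before adding the port (the source file name is a placeholder):
+
+.. code-block:: console
+
+    $ sudo cp /path/to/your-dump.pcap /tmp/spp-rx1.pcap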
+
+Capture Incoming Packets
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+As the first use case, add a pcap PMD and capture incoming packets
+from ``phy:0``.
+
+.. code-block:: console
+
+    spp > nfv 1; add pcap:1
+    Add pcap:1.
+    spp > nfv 1; patch phy:0 pcap:1
+    Patch ports (phy:0 -> pcap:1).
+    spp > nfv 1; forward
+    Start forwarding.
+
+.. _figure_spp_pcap_incoming:
+
+.. figure:: ../images/setup/use_cases/spp_pcap_incoming.*
+   :width: 50%
+
+   Capture incoming packets
+
+In this example, we use pktgen.
+Once you start forwarding packets from pktgen, you can see that the
+size of ``/tmp/spp-tx1.pcap`` increases rapidly (or gradually,
+depending on the rate).
+
+.. code-block:: console
+
+    Pktgen:/> set 0 size 1024
+    Pktgen:/> start 0
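+
+You can confirm the growth, for instance, by checking the file size
+periodically on the host:
+
+.. code-block:: console
+
+    $ watch ls -l /tmp/spp-tx1.pcap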
+
+To stop capturing, simply stop forwarding of ``spp_nfv``.
+
+.. code-block:: console
+
+    spp > nfv 1; stop
+    Stop forwarding.
+
+You can analyze the dumped pcap file with other tools such as
+Wireshark.
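+
+For a quick check from the command line, ``capinfos`` and ``tcpdump``
+also work, assuming they are installed:
+
+.. code-block:: console
+
+    $ capinfos /tmp/spp-tx1.pcap
+    $ tcpdump -nn -r /tmp/spp-tx1.pcap -c 5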
+
+Restore dumped Packets
+~~~~~~~~~~~~~~~~~~~~~~
+
+In this use case, use the file dumped in the previous section.
+Copy ``spp-tx1.pcap`` to ``spp-rx2.pcap`` first.
+
+.. code-block:: console
+
+    $ sudo cp /tmp/spp-tx1.pcap /tmp/spp-rx2.pcap
+
+Then, add pcap PMD ``pcap:2`` to another ``spp_nfv``.
+
+.. code-block:: console
+
+    spp > nfv 2; add pcap:2
+    Add pcap:2.
+
+.. _figure_spp_pcap_restoring:
+
+.. figure:: ../images/setup/use_cases/spp_pcap_restoring.*
+   :width: 52%
+
+   Restore dumped packets
+
+You can find that ``spp-tx2.pcap`` is created and ``spp-rx2.pcap``
+still remains.
+
+.. code-block:: console
+
+    $ ls -al /tmp/spp*.pcap
+    -rw-r--r-- 1 root root         24  ...  /tmp/spp-rx1.pcap
+    -rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-rx2.pcap
+    -rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-tx1.pcap
+    -rw-r--r-- 1 root root          0  ...  /tmp/spp-tx2.pcap
+
+To confirm that the packets are restored, patch ``pcap:2`` to
+``phy:1`` and watch the received packets on pktgen.
+
+.. code-block:: console
+
+    spp > nfv 2; patch pcap:2 phy:1
+    Patch ports (pcap:2 -> phy:1).
+    spp > nfv 2; forward
+    Start forwarding.
+
+After forwarding starts, you can see that the packet count increases.
+
+
+Multiple Nodes
+--------------
+
+SPP provides multi-node support for configuring a network across
+several nodes from the SPP CLI. You can configure each of the nodes
+step by step.
+
+In :numref:`figure_spp_multi_nodes_vhost`, there are four nodes on
+which SPP and service VMs are running. Host1 behaves as a patch panel
+connecting the other nodes. A request is sent from a VM on host2 to a
+VM on host3 or host4. Host4 is a backup server for host3 and replaces
+host3 when the network configuration is changed. Blue lines are the
+paths for host3, red lines are for host4, and they are switched
+alternately.
+
+.. _figure_spp_multi_nodes_vhost:
+
+.. figure:: ../images/setup/use_cases/spp_multi_nodes_vhost.*
+   :width: 100%
+
+   Multiple nodes example
+
+Launch SPP on Multiple Nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before launching the SPP CLI, launch spp-ctl on each of the nodes. You
+should give an IP address with the ``-b`` option so that it can be
+accessed from outside the node.
+Here is an example of launching spp-ctl on host1.
+
+.. code-block:: console
+
+    # Launch on host1
+    $ python3 src/spp-ctl/spp-ctl -b 192.168.11.101
+
+You also need to launch it on host2, host3 and host4, each in its own
+terminal.
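+
+You can check that each spp-ctl is reachable by sending a REST request
+from any node, for instance with ``curl``. The ``/v1/processes``
+endpoint here is an assumption about spp-ctl's web API; the response
+lists the processes connected so far.
+
+.. code-block:: console
+
+    $ curl http://192.168.11.101:7777/v1/processes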
+
+After all of the spp-ctl processes are launched, launch the SPP CLI
+with four ``-b`` options, one for each host. The SPP CLI can be
+launched on any of the nodes.
+
+.. code-block:: console
+
+    # Launch SPP CLI
+    $ python src/spp.py -b 192.168.11.101 \
+        -b 192.168.11.102 \
+        -b 192.168.11.103 \
+        -b 192.168.11.104
+
+If you succeeded in launching all of the processes above, you can find
+them by running the ``server list`` command.
+
+.. code-block:: console
+
+    # list registered spp-ctl servers
+    spp > server list
+      1: 192.168.1.101:7777 *
+      2: 192.168.1.102:7777
+      3: 192.168.1.103:7777
+      4: 192.168.1.104:7777
+
+You might notice that the first entry is marked with ``*``. It means
+that the first node is the one currently under management.
+
+Switch Nodes
+~~~~~~~~~~~~
+
+The SPP CLI manages the node marked with ``*``. To configure other
+nodes, change the managed node with the ``server`` command.
+Here is an example of switching to the third node.
+
+.. code-block:: console
+
+    # switch to the third node
+    spp > server 3
+    Switch spp-ctl to "3: 192.168.1.103:7777".
+
+And this is the result after switching to host3.
+
+.. code-block:: console
+
+    spp > server list
+      1: 192.168.1.101:7777
+      2: 192.168.1.102:7777
+      3: 192.168.1.103:7777 *
+      4: 192.168.1.104:7777
+
+You can also confirm this change by checking the IP address of spp-ctl
+with the ``status`` command.
+
+.. code-block:: console
+
+    spp > status
+    - spp-ctl:
+      - address: 192.168.1.103:7777
+    - primary:
+      - status: not running
+    ...
+
+Configure Patch Panel Node
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As the first step of the network configuration, set up the blue lines
+on host1 as described in :numref:`figure_spp_multi_nodes_vhost`.
+You should confirm that the managed server is host1.
+
+.. code-block:: console
+
+    spp > server list
+      1: 192.168.1.101:7777 *
+      2: 192.168.1.102:7777
+      ...
+
+Patch two sets of physical ports and start forwarding.
+
+.. code-block:: console
+
+    spp > nfv 1; patch phy:1 phy:2
+    Patch ports (phy:1 -> phy:2).
+    spp > nfv 1; patch phy:3 phy:0
+    Patch ports (phy:3 -> phy:0).
+    spp > nfv 1; forward
+    Start forwarding.
+
+Configure Service VM Nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The setup of host2, host3, and host4 is almost the same as
+:ref:`Uni-Directional L2fwd with Vhost PMD<usecase_unidir_l2fwd_vhost>`.
+
+For host2, switch the server to host2 and run the nfv commands.
+
+.. code-block:: console
+
+    # switch to server 2
+    spp > server 2
+    Switch spp-ctl to "2: 192.168.1.102:7777".
+
+    # configure
+    spp > nfv 1; patch phy:0 vhost:0
+    Patch ports (phy:0 -> vhost:0).
+    spp > nfv 1; patch vhost:0 phy:1
+    Patch ports (vhost:0 -> phy:1).
+    spp > nfv 1; forward
+    Start forwarding.
+
+Then, switch to host3 and host4 and do the same configuration.
+
+Change Path to Backup Node
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Finally, change the path from the blue lines to the red lines.
+
+.. code-block:: console
+
+    # switch to server 1
+    spp > server 1
+    Switch spp-ctl to "1: 192.168.1.101:7777".
+
+    # remove blue path
+    spp > nfv 1; stop
+    Stop forwarding.
+    spp > nfv 1; patch reset
+    Clear all of patches.
+
+    # configure red path
+    spp > nfv 2; patch phy:1 phy:4
+    Patch ports (phy:1 -> phy:4).
+    spp > nfv 2; patch phy:5 phy:0
+    Patch ports (phy:5 -> phy:0).
+    spp > nfv 2; forward
+    Start forwarding.
-- 
2.7.4

Thread overview: 12+ messages
2019-01-09  1:49 [spp] Update docs structure ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 01/11] docs: change structure of overview section ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 02/11] docs: move overview of spp-ctl to SPP overview ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 03/11] docs: update image of " ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 04/11] docs: update overview section ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 05/11] docs: add overview image for design section ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 06/11] docs: add image of design of spp-ctl ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 07/11] docs: add SPP controller in design section ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 08/11] docs: add spp_primary " ogawa.yasufumi
2019-01-09  1:49 ` [spp] [PATCH 09/11] docs: move design section to doc root ogawa.yasufumi
2019-01-09  1:49 ` ogawa.yasufumi [this message]
2019-01-09  1:49 ` [spp] [PATCH 11/11] docs: remove spp-ctl section ogawa.yasufumi
