Soft Patch Panel
From: ogawa.yasufumi@lab.ntt.co.jp
To: spp@dpdk.org, ferruh.yigit@intel.com, ogawa.yasufumi@lab.ntt.co.jp
Subject: [spp] [PATCH 02/20] docs: divide getting started guide
Date: Mon, 18 Feb 2019 20:48:13 +0900	[thread overview]
Message-ID: <1550490511-31683-3-git-send-email-ogawa.yasufumi@lab.ntt.co.jp> (raw)
In-Reply-To: <1550490511-31683-1-git-send-email-ogawa.yasufumi@lab.ntt.co.jp>

From: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>

This update divides `Getting Started` into two parts because this
chapter could become too large and hard to maintain after the sections
of SPP VF are moved.

Signed-off-by: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>
---
 docs/guides/gsg/howto_use.rst              | 578 +++++++++++++++++++++++++++++
 docs/guides/gsg/index.rst                  |  14 +
 docs/guides/gsg/install.rst                | 282 ++++++++++++++
 docs/guides/gsg/performance_opt.rst        |  82 ++++
 docs/guides/gsg/setup.rst                  | 152 ++++++++
 docs/guides/index.rst                      |   2 +-
 docs/guides/setup/getting_started.rst      | 420 ---------------------
 docs/guides/setup/howto_use.rst            | 578 -----------------------------
 docs/guides/setup/index.rst                |  13 -
 docs/guides/setup/performance_opt.rst      |  82 ----
 docs/guides/tools/sppc/getting_started.rst |   4 +-
 docs/guides/use_cases/spp_nfv.rst          |   2 +-
 12 files changed, 1112 insertions(+), 1097 deletions(-)
 create mode 100644 docs/guides/gsg/howto_use.rst
 create mode 100644 docs/guides/gsg/index.rst
 create mode 100644 docs/guides/gsg/install.rst
 create mode 100644 docs/guides/gsg/performance_opt.rst
 create mode 100644 docs/guides/gsg/setup.rst
 delete mode 100644 docs/guides/setup/getting_started.rst
 delete mode 100644 docs/guides/setup/howto_use.rst
 delete mode 100644 docs/guides/setup/index.rst
 delete mode 100644 docs/guides/setup/performance_opt.rst

diff --git a/docs/guides/gsg/howto_use.rst b/docs/guides/gsg/howto_use.rst
new file mode 100644
index 0000000..cc3dd9b
--- /dev/null
+++ b/docs/guides/gsg/howto_use.rst
@@ -0,0 +1,578 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation
+
+.. _spp_setup_howto_use:
+
+How to Use
+==========
+
+As described in :ref:`Design<spp_overview_design>`, SPP consists of a
+primary process for managing resources, secondary processes for
+forwarding packets, and the SPP controller which accepts user commands
+and sends them to SPP processes.
+
+You should keep in mind the order of launching processes.
+The primary process must be launched before secondary processes.
+``spp-ctl`` needs to be launched before SPP CLI, but not necessarily
+before the other processes. SPP CLI is launched from ``spp.py``.
+If ``spp-ctl`` is not running when the primary and
+secondary processes are launched, they wait until ``spp-ctl`` is launched.
+
+In general, ``spp-ctl`` should be launched first, then SPP CLI and
+``spp_primary``, each in its own terminal and not as a background process.
+After ``spp_primary``, launch secondary processes as your usage requires.
+If you just patch two DPDK applications on the host, one ``spp_nfv`` is
+enough, or use ``spp_vf`` if you need to classify packets.
+How to use these secondary processes is described in the next chapters.
+
+
+SPP Controller
+--------------
+
+SPP Controller consists of ``spp-ctl`` and SPP CLI.
+
+spp-ctl
+~~~~~~~
+
+``spp-ctl`` is an HTTP server providing REST APIs for managing SPP
+processes. By default, it is accessed with the URL ``http://127.0.0.1:7777``
+or ``http://localhost:7777``.
+``spp-ctl`` shows no messages just after it is launched, but it shows
+log messages for events such as receiving a request or terminating
+a process.
+
+.. code-block:: console
+
+    # terminal 1
+    $ cd /path/to/spp
+    $ python3 src/spp-ctl/spp-ctl
+
+Notice that ``spp-ctl`` is implemented in ``python3`` and cannot be
+launched with ``python2``.
+
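+Once ``spp-ctl`` is running, you can check that it accepts REST requests
+with a tool such as ``curl``. The endpoint used below is only an example
+for illustration; refer to the API reference documentation for the actual
+resources.
+
+.. code-block:: console
+
+    # send a request to spp-ctl on the default address and port
+    $ curl -X GET http://127.0.0.1:7777/v1/processes
+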
+It has an option ``-b`` for binding the address explicitly so that it can
+be accessed from addresses other than the default, ``127.0.0.1`` or
+``localhost``.
+If you deploy SPP on multiple nodes, you might need to use the ``-b``
+option so that it can be accessed from processes running on nodes other
+than the local node.
+
+.. code-block:: console
+
+    # launch with URL http://192.168.1.100:7777
+    $ python3 src/spp-ctl/spp-ctl -b 192.168.1.100
+
+``spp-ctl`` is the most important process in SPP. For some use cases,
+it may be better to manage this process with ``systemd``.
+Here is a simple example of a service file for systemd.
+
+.. code-block:: none
+
+    [Unit]
+    Description = SPP Controller
+
+    [Service]
+    ExecStart = /usr/bin/python3 /path/to/spp/src/spp-ctl/spp-ctl
+    User = root
+
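+If you save this file as a systemd unit, for example as
+``/etc/systemd/system/spp-ctl.service`` (the path and unit name here are
+only an example), you can manage it with ``systemctl`` as usual.
+
+.. code-block:: console
+
+    # reload unit files and start spp-ctl as a service
+    $ sudo systemctl daemon-reload
+    $ sudo systemctl start spp-ctl
+
+    # start it automatically at boot
+    $ sudo systemctl enable spp-ctl
+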
+All of the options can be displayed with the help option ``-h``.
+
+.. code-block:: console
+
+    $ python3 ./src/spp-ctl/spp-ctl -h
+    usage: spp-ctl [-h] [-b BIND_ADDR] [-p PRI_PORT] [-s SEC_PORT] [-a API_PORT]
+
+    SPP Controller
+
+    optional arguments:
+      -h, --help            show this help message and exit
+      -b BIND_ADDR, --bind-addr BIND_ADDR
+                            bind address, default=localhost
+      -p PRI_PORT           primary port, default=5555
+      -s SEC_PORT           secondary port, default=6666
+      -a API_PORT           web api port, default=7777
+
+.. _spp_setup_howto_use_spp_cli:
+
+SPP CLI
+~~~~~~~
+
+Once ``spp-ctl`` is launched, go to the next terminal and launch SPP CLI.
+It supports both Python 2 and 3, so you can simply use ``python`` here.
+
+.. code-block:: console
+
+    # terminal 2
+    $ cd /path/to/spp
+    $ python src/spp.py
+    Welcome to the spp.   Type help or ? to list commands.
+
+    spp >
+
+If you launched ``spp-ctl`` with the ``-b`` option, you also need to use
+the same option for ``spp.py``, or it fails to connect and launch.
+
+.. code-block:: console
+
+    # terminal 2
+    # bind to spp-ctl on http://192.168.1.100:7777
+    $ python src/spp.py -b 192.168.1.100
+    Welcome to the spp.   Type help or ? to list commands.
+
+    spp >
+
+One of the typical use cases of this option is deploying multiple SPP nodes.
+:numref:`figure_spp_howto_multi_spp` is an example of a multiple-node case.
+There are three nodes, on each of which ``spp-ctl`` is running to accept
+requests for SPP. These ``spp-ctl`` processes are controlled from
+``spp.py`` on host1, and all of the paths are configured across the nodes.
+Paths can also be configured between hosts by changing the
+source or destination of phy ports.
+
+.. _figure_spp_howto_multi_spp:
+
+.. figure:: ../images/setup/howto_use/spp_howto_multi_spp.*
+   :width: 80%
+
+   Multiple SPP nodes
+
+Launch SPP CLI with three bind addresses, each given with the ``-b``
+option, to specify the ``spp-ctl`` instances.
+
+.. code-block:: console
+
+    # Launch SPP CLI with three nodes
+    $ python src/spp.py -b 192.168.11.101 \
+        -b 192.168.11.102 \
+        -b 192.168.11.103
+
+You can also add nodes after SPP CLI is launched.
+
+.. code-block:: console
+
+    # Launch SPP CLI with one node
+    $ python src/spp.py -b 192.168.11.101
+    Welcome to the SPP CLI. Type `help` or `?` to list commands.
+
+    # Add the rest of nodes after
+    spp > server add 192.168.11.102
+    Registered spp-ctl "192.168.11.102:7777".
+    spp > server add 192.168.11.103
+    Registered spp-ctl "192.168.11.103:7777".
+
+You can find the hosts under the management of SPP CLI and switch among
+them with the ``server`` command.
+
+.. code-block:: none
+
+    spp > server list
+      1: 192.168.1.101:7777 *
+      2: 192.168.1.102:7777
+      3: 192.168.1.103:7777
+
+To change the server, add an index number after ``server``.
+
+.. code-block:: none
+
+    # Switch to the third node
+    spp > server 3
+    Switch spp-ctl to "3: 192.168.1.103:7777".
+
+All of the options can be displayed with the help option ``-h``.
+
+.. code-block:: console
+
+    $ python src/spp.py -h
+    usage: spp.py [-h] [-b BIND_ADDR] [-a API_PORT]
+
+    SPP Controller
+
+    optional arguments:
+      -h, --help            show this help message and exit
+      -b BIND_ADDR, --bind-addr BIND_ADDR
+                            bind address, default=127.0.0.1
+      -a API_PORT, --api-port API_PORT
+                        bind address, default=777
+
+All of SPP CLI commands are described in :doc:`../../commands/index`.
+
+
+Default Configuration
+^^^^^^^^^^^^^^^^^^^^^
+
+SPP CLI imports several parameters from a configuration file at launch.
+Some of the behaviours of SPP CLI depend on these parameters.
+The default configuration is defined in
+``src/controller/config/default.yml``.
+You can change these parameters by editing the config file, or with the
+``config`` command after SPP CLI is launched.
+
+All of the config parameters can be displayed with the ``config`` command.
+
+.. code-block:: none
+
+    # show list of config
+    spp > config
+    - max_secondary: "16"       # The maximum number of secondary processes
+    - sec_nfv_nof_lcores: "1"   # Default num of lcores for workers of spp_nfv
+    ....
+
+To change the config, set a value for the parameter.
+Here is an example of changing the command prompt.
+
+.. code-block:: none
+
+    # set prompt to "$ spp "
+    spp > config prompt "$ spp "
+    Set prompt: "$ spp "
+    $ spp
+
+
+SPP Primary
+-----------
+
+SPP primary is a resource manager and is responsible for
+initializing the EAL for secondary processes. It should be launched
+before the secondaries.
+
+To launch SPP primary, run ``spp_primary`` with specific options.
+
+.. code-block:: console
+
+    # terminal 3
+    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
+        -l 1 -n 4 \
+        --socket-mem 512,512 \
+        --huge-dir /dev/hugepages \
+        --proc-type primary \
+        --base-virtaddr 0x100000000 \
+        -- \
+        -p 0x03 \
+        -n 10 \
+        -s 192.168.1.100:5555
+
+SPP primary takes EAL options and application specific options.
+
+The core list option ``-l`` is for assigning cores, and SPP primary
+requires just one core. You can use the core mask option ``-c`` instead
+of ``-l``.
+If you use a single NUMA node, you can use ``-m 1024`` for memory
+reservation instead of ``--socket-mem 1024,0``. In the example above,
+512 MB is reserved on each of the nodes with ``--socket-mem 512,512``.
+
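+As a reference, here is a sketch of an equivalent launch on a single NUMA
+node using ``-c`` and ``-m`` instead of ``-l`` and ``--socket-mem``. The
+values are only an example and depend on your environment.
+
+.. code-block:: console
+
+    # terminal 3
+    # core mask 0x02 corresponds to core list "-l 1"
+    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
+        -c 0x02 -n 4 \
+        -m 1024 \
+        --huge-dir /dev/hugepages \
+        --proc-type primary \
+        --base-virtaddr 0x100000000 \
+        -- \
+        -p 0x03 \
+        -n 10 \
+        -s 192.168.1.100:5555
+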
+.. note::
+
+   If you use DPDK v18.08 or earlier, you should consider giving
+   ``--base-virtaddr`` a value of 4 GiB or higher because a secondary
+   process may fail to mmap while initializing memory. The reason for the
+   failure is that the secondary process tries to reserve a region which
+   is already used by some thread of the primary.
+
+   .. code-block:: console
+
+      # Failure on the secondary process
+      EAL: Could not mmap 17179869184 ... - please use '--base-virtaddr' option
+
+   ``--base-virtaddr`` sets the base address explicitly to avoid this
+   overlap. 4 GiB, ``0x100000000``, is enough for this purpose.
+
+   If you use DPDK v18.11 or later, ``--base-virtaddr 0x100000000`` is
+   enabled by default. You need this option only for changing the default
+   value.
+
+
+In general, one lcore is enough for ``spp_primary``. If you give two or
+more, it uses the second lcore to display statistics periodically and
+does not use the others.
+
+.. note::
+
+    You can still get statistics in SPP CLI with the ``pri; status``
+    command even if you give only one core.
+
+The primary process sets up physical ports according to the port mask
+given with the ``-p`` option, and as many ring ports as given with the
+``-n`` option. Ports of the ``-p`` option are for accepting incoming
+packets, and those of the ``-n`` option are for inter-process packet
+forwarding. You can also add ports initialized with the ``--vdev`` option
+as physical ports. In the example below, ``eth_vhost1`` and ``eth_vhost2``
+are used as the first and second phy ports. However, ports added with
+``--vdev`` cannot be referred to from secondary processes.
+
+.. code-block:: console
+
+    # terminal 3
+    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
+        -l 1 -n 4 \
+        --socket-mem 512,512 \
+        --huge-dir=/dev/hugepages \
+        --vdev eth_vhost1,iface=/tmp/sock1 \
+        --vdev eth_vhost2,iface=/tmp/sock2 \
+        --proc-type=primary \
+        --base-virtaddr 0x100000000 \
+        -- \
+        -p 0x03 \
+        -n 10 \
+        -s 192.168.1.100:5555
+
+- EAL options:
+
+  - -l: core list
+  - --socket-mem: Memory size on each of NUMA nodes.
+  - --huge-dir: Path of hugepage dir.
+  - --proc-type: Process type.
+  - --base-virtaddr: Specify base virtual address.
+
+- Application options:
+
+  - -p: Port mask.
+  - -n: Number of ring PMD.
+  - -s: IP address of controller and port prepared for primary.
+
+
+SPP Secondary
+-------------
+
+A secondary process behaves as a client of the primary process and as a
+worker doing packet processing tasks. There are several kinds of secondary
+process, for example, ones simply forwarding packets between ports or
+classifying packets by referring to their headers.
+
+This section describes the simplest one, ``spp_nfv``, which simply
+forwards packets similarly to ``l2fwd``.
+
+
+Launch spp_nfv on Host
+~~~~~~~~~~~~~~~~~~~~~~
+
+Run ``spp_nfv`` with options.
+
+.. code-block:: console
+
+    # terminal 4
+    $ cd /path/to/spp
+    $ sudo ./src/nfv/x86_64-native-linuxapp-gcc/spp_nfv \
+        -l 2-3 -n 4 \
+        --proc-type=secondary \
+        -- \
+        -n 1 \
+        -s 192.168.1.100:6666
+
+- EAL options:
+
+  - -l: core list (two cores required)
+  - --proc-type: process type
+
+- Application options:
+
+  - -n: secondary ID
+  - -s: IP address of controller and port prepared for secondary
+
+The secondary ID is used to identify the process when sending messages
+and must be unique among all of the secondaries.
+If you attempt to launch a secondary process with an ID already in use,
+it fails.
+
+
+Launch from SPP CLI
+~~~~~~~~~~~~~~~~~~~
+
+You can launch SPP secondary processes from SPP CLI without opening
+other terminals. The ``pri; launch`` command launches any of the secondary
+processes with specific options. It takes the secondary type, ID, and
+options for the EAL and the application itself, similar to launching from
+a terminal.
+Here is an example of launching ``spp_nfv``. You may notice that there is
+no ``--proc-type secondary``, which is normally required for a secondary.
+It is added to the options by SPP CLI before launching the process.
+
+.. code-block:: none
+
+    # terminal 2
+    # launch spp_nfv with sec ID 2
+    spp > pri; launch nfv 2 -l 1,2 -m 512 -- -n 2 -s 192.168.1.100:6666
+    Send request to launch nfv:2.
+
+After running this command, you can confirm that ``nfv:2`` has been
+launched successfully.
+
+.. code-block:: none
+
+    # terminal 2
+    spp > status
+    - spp-ctl:
+      - address: 192.168.1.100:7777
+    - primary:
+      - status: running
+    - secondary:
+      - processes:
+        1: nfv:2
+
+Instead of displaying log messages in the terminal, it writes them to
+a log file. All log files of secondary processes launched with
+``pri`` are located in the ``log/`` directory under the project root.
+In this example, the log file is ``log/spp_nfv-2.log``.
+
+.. code-block:: console
+
+    # terminal 5
+    $ tail -f log/spp_nfv-2.log
+    SPP_NFV: Used lcores: 1 2
+    SPP_NFV: entering main loop on lcore 2
+    SPP_NFV: My ID 2 start handling message
+    SPP_NFV: [Press Ctrl-C to quit ...]
+    SPP_NFV: Creating socket...
+    SPP_NFV: Trying to connect ... socket 24
+    SPP_NFV: Connected
+    SPP_NFV: Received string: _get_client_id
+    SPP_NFV: token 0 = _get_client_id
+    SPP_NFV: To Server: {"results":[{"result":"success"}],"client_id":2, ...
+
+
+Launch SPP on VM
+~~~~~~~~~~~~~~~~
+
+To communicate with a DPDK application running on a VM,
+it is required to create a virtual device for the VM.
+In this instruction, launch a VM with the qemu command and
+create ``vhost-user`` and ``virtio-net-pci`` devices for the VM.
+
+Before launching the VM, you need to prepare a socket file for creating
+the ``vhost-user`` device.
+Run the ``add`` command with resource UID ``vhost:0`` to create the
+socket file.
+
+.. code-block:: none
+
+    # terminal 2
+    spp > nfv 1; add vhost:0
+
+In this example, a socket file with index 0 is created from ``spp_nfv``
+of ID 1.
+The socket file is created as ``/tmp/sock0``.
+It is used as a qemu option to add the vhost interface.
+
+Launch the VM with ``qemu-system-x86_64`` for the x86 64-bit architecture.
+Qemu takes many options for defining resources including virtual
+devices. You cannot use this example as it is because some options
+depend on your environment.
+You should specify the disk image with ``-hda``, the sixth option in this
+example, and a ``qemu-ifup`` script, as on the 12th line, for assigning
+an IP address so that the VM can be accessed.
+
+.. code-block:: console
+
+    # terminal 5
+    $ sudo qemu-system-x86_64 \
+        -cpu host \
+        -enable-kvm \
+        -numa node,memdev=mem \
+        -mem-prealloc \
+        -hda /path/to/image.qcow2 \
+        -m 4096 \
+        -smp cores=4,threads=1,sockets=1 \
+        -object \
+        memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
+        -device e1000,netdev=net0,mac=00:AD:BE:B3:11:00 \
+        -netdev tap,id=net0,ifname=net0,script=/path/to/qemu-ifup \
+        -nographic \
+        -chardev socket,id=chr0,path=/tmp/sock0 \
+        -netdev vhost-user,id=net1,chardev=chr0,vhostforce \
+        -device virtio-net-pci,netdev=net1,mac=00:AD:BE:B4:11:00 \
+        -monitor telnet::44911,server,nowait
+
+This VM has two network interfaces.
+``-device e1000`` is a management network port
+which requires ``qemu-ifup`` for activation while launching.
+The management network port is used for logging in and setting up the VM.
+``-device virtio-net-pci`` is created for the SPP or DPDK application
+running on the VM.
+
+``vhost-user`` is the backend of ``virtio-net-pci``, and it requires
+the socket file ``/tmp/sock0`` created by the secondary process and given
+with the ``-chardev`` option.
+
+For other options, please refer to
+`QEMU User Documentation
+<https://qemu.weilnetz.de/doc/qemu-doc.html>`_.
+
+.. note::
+
+    In general, you need to prepare several qemu images for launching
+    several VMs, but installing DPDK and SPP on each image is bothersome
+    and time consuming.
+
+    You can shortcut this task by creating a template image and copying
+    it for the VMs. In that case, installation is needed only once, for
+    the template.
+
+After the VM is booted, install DPDK and SPP in the VM as on the host.
+The IP address of the VM is assigned while it is created, and you can
+find the address in a file generated by libvirt if you use Ubuntu.
+
+.. code-block:: console
+
+    # terminal 5
+    $ cat /var/lib/libvirt/dnsmasq/virbr0.status
+    [
+        {
+            "ip-address": "192.168.122.100",
+            ...
+
+    # Login VM, install DPDK and SPP
+    $ ssh user@192.168.122.100
+    ...
+
+It is recommended to configure ``/etc/default/grub`` for hugepages and
+reboot the VM after installation.
+
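+Here is a minimal sketch of such a configuration inside the VM, assuming
+2MB hugepages. The number of pages is just an example and depends on the
+memory you assigned to the VM.
+
+.. code-block:: console
+
+    # /etc/default/grub in the VM
+    GRUB_CMDLINE_LINUX="hugepagesz=2M hugepages=1024"
+
+    # update grub config and reboot the VM
+    $ sudo update-grub
+    $ sudo reboot
+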
+Finally, log in to the VM, bind ports to DPDK and launch ``spp-ctl``
+and ``spp_primary``.
+You should add the ``-b`` option so that it can be accessed from SPP CLI
+on the host.
+
+.. code-block:: console
+
+    # terminal 5
+    $ ssh user@192.168.122.100
+    $ cd /path/to/spp
+    $ python3 src/spp-ctl/spp-ctl -b 192.168.122.100
+    ...
+
+Confirm that virtio interfaces are under the management of DPDK before
+launching DPDK processes.
+
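+One way to check, shown as a sketch below, is to run ``dpdk-devbind.py``
+in the VM and confirm that the virtio device appears under the
+DPDK-compatible drivers; the DPDK path in the VM is an assumption.
+
+.. code-block:: console
+
+    # terminal 6
+    $ ssh user@192.168.122.100
+    $ $RTE_SDK/usertools/dpdk-devbind.py --status
+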
+.. code-block:: console
+
+    # terminal 6
+    $ ssh user@192.168.122.100
+    $ cd /path/to/spp
+    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
+        -l 1 -n 4 \
+        -m 1024 \
+        --huge-dir=/dev/hugepages \
+        --proc-type=primary \
+        --base-virtaddr 0x100000000 \
+        -- \
+        -p 0x03 \
+        -n 6 \
+        -s 192.168.122.100:5555
+
+You can configure SPP running on the VM from SPP CLI.
+Use the ``server`` command to switch the node under management.
+
+.. code-block:: none
+
+    # terminal 2
+    # show list of spp-ctl nodes
+    spp > server
+    1: 192.168.1.100:7777 *
+    2: 192.168.122.100:7777
+
+    # change node under the management
+    spp > server 2
+    Switch spp-ctl to "2: 192.168.122.100:7777".
+
+    # confirm node is switched
+    spp > server
+    1: 192.168.1.100:7777
+    2: 192.168.122.100:7777 *
+
+    # configure SPP on VM
+    spp > status
+    ...
+
+Now you are ready to set up your network environment for DPDK and
+non-DPDK applications with SPP.
+SPP enables users to configure service function chaining between
+applications running on the host and on VMs.
+Use cases of network configuration are explained in the next chapter.
diff --git a/docs/guides/gsg/index.rst b/docs/guides/gsg/index.rst
new file mode 100644
index 0000000..c73d8bb
--- /dev/null
+++ b/docs/guides/gsg/index.rst
@@ -0,0 +1,14 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation
+
+Getting Started Guide
+=====================
+
+.. toctree::
+   :maxdepth: 2
+   :numbered:
+
+   setup
+   install
+   howto_use
+   performance_opt
diff --git a/docs/guides/gsg/install.rst b/docs/guides/gsg/install.rst
new file mode 100644
index 0000000..3bf5246
--- /dev/null
+++ b/docs/guides/gsg/install.rst
@@ -0,0 +1,282 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2017-2019 Nippon Telegraph and Telephone Corporation
+
+
+.. _setup_install_dpdk_spp:
+
+Install DPDK and SPP
+====================
+
+Before using SPP, you need to install DPDK.
+This section briefly describes how to install and set up DPDK.
+Refer to `DPDK documentation
+<https://dpdk.org/doc/guides/>`_ for more details.
+For Linux, see `Getting Started Guide for Linux
+<http://www.dpdk.org/doc/guides/linux_gsg/index.html>`_ .
+
+DPDK
+----
+
+Clone repository and compile DPDK in any directory.
+
+.. code-block:: console
+
+    $ cd /path/to/any
+    $ git clone http://dpdk.org/git/dpdk
+
+To compile DPDK, it is required to install the libnuma development
+library.
+
+.. code-block:: console
+
+    $ sudo apt install libnuma-dev
+
+Python and pip are also required if not installed.
+
+.. code-block:: console
+
+    # Python2
+    $ sudo apt install python python-pip
+
+    # Python3
+    $ sudo apt install python3 python3-pip
+
+SPP provides a libpcap-based PMD for dumping packets to a file or
+retrieving them from a file.
+To use the PCAP PMD, install ``libpcap-dev`` and enable it.
+``text2pcap``, which is included in ``wireshark``, is also required for
+creating pcap files.
+
+.. code-block:: console
+
+    $ sudo apt install libpcap-dev
+    $ sudo apt install wireshark
+
+PCAP is disabled by default in the DPDK configuration.
+``CONFIG_RTE_LIBRTE_PMD_PCAP`` and ``CONFIG_RTE_PORT_PCAP`` define this
+configuration; set them to ``y`` to enable it.
+
+.. code-block:: console
+
+    # dpdk/config/common_base
+    CONFIG_RTE_LIBRTE_PMD_PCAP=y
+    ...
+    CONFIG_RTE_PORT_PCAP=y
+
+Compile DPDK for the target environment.
+
+.. code-block:: console
+
+    $ cd dpdk
+    $ export RTE_SDK=$(pwd)
+    $ export RTE_TARGET=x86_64-native-linuxapp-gcc  # depends on your env
+    $ make install T=$RTE_TARGET
+
+
+SPP
+---
+
+Clone repository and compile SPP in any directory.
+
+.. code-block:: console
+
+    $ cd /path/to/any
+    $ git clone http://dpdk.org/git/apps/spp
+    $ cd spp
+    $ make  # Confirm that $RTE_SDK and $RTE_TARGET are set
+
+It is also required to install Python 3 and the packages for running the
+python scripts as follows.
+You might need to run ``pip3`` with ``sudo`` if it fails.
+
+.. code-block:: console
+
+    $ sudo apt update
+    $ sudo apt install python3
+    $ sudo apt install python3-pip
+    $ pip3 install -r requirements.txt
+
+
+Binding Network Ports to DPDK
+-----------------------------
+
+Network ports must be bound to DPDK with a UIO (Userspace IO) driver.
+The UIO driver is for mapping device memory to userspace and registering
+interrupts.
+
+UIO Drivers
+~~~~~~~~~~~
+
+You usually use the standard ``uio_pci_generic`` for many use cases
+or ``vfio-pci`` for more robust and secure cases.
+Both drivers are included by default in modern Linux kernels.
+
+.. code-block:: console
+
+    # Activate uio_pci_generic
+    $ sudo modprobe uio_pci_generic
+
+    # or vfio-pci
+    $ sudo modprobe vfio-pci
+
+You can also use the kernel module ``igb_uio`` included in DPDK instead
+of ``uio_pci_generic`` or ``vfio-pci``.
+
+.. code-block:: console
+
+    $ sudo modprobe uio
+    $ sudo insmod kmod/igb_uio.ko
+
+Binding Network Ports
+~~~~~~~~~~~~~~~~~~~~~
+
+Once the UIO driver is activated, bind network ports with the driver.
+DPDK provides ``usertools/dpdk-devbind.py`` for managing devices.
+
+Find the ports to bind to DPDK by running the tool with the ``-s`` option.
+
+.. code-block:: console
+
+    $ $RTE_SDK/usertools/dpdk-devbind.py --status
+
+    Network devices using DPDK-compatible driver
+    ============================================
+    <none>
+
+    Network devices using kernel driver
+    ===================================
+    0000:29:00.0 '82571EB ... 10bc' if=enp41s0f0 drv=e1000e unused=
+    0000:29:00.1 '82571EB ... 10bc' if=enp41s0f1 drv=e1000e unused=
+    0000:2a:00.0 '82571EB ... 10bc' if=enp42s0f0 drv=e1000e unused=
+    0000:2a:00.1 '82571EB ... 10bc' if=enp42s0f1 drv=e1000e unused=
+
+    Other Network devices
+    =====================
+    <none>
+    ....
+
+You can see that the network ports are bound to kernel drivers and not
+to DPDK.
+To bind a port to DPDK, run ``dpdk-devbind.py`` specifying a driver
+and a device ID.
+The device ID is the PCI address of the device or a more friendly style
+like ``eth0`` found by the ``ifconfig`` or ``ip`` command.
+
+.. code-block:: console
+
+    # Bind a port with 2a:00.0 (PCI address)
+    $ ./usertools/dpdk-devbind.py --bind=uio_pci_generic 2a:00.0
+
+    # or eth0
+    $ ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth0
+
+
+After binding two ports, you can find they are under the DPDK driver and
+can no longer be found by using ``ifconfig`` or ``ip``.
+
+.. code-block:: console
+
+    $ $RTE_SDK/usertools/dpdk-devbind.py -s
+
+    Network devices using DPDK-compatible driver
+    ============================================
+    0000:2a:00.0 '82571EB ... 10bc' drv=uio_pci_generic unused=vfio-pci
+    0000:2a:00.1 '82571EB ... 10bc' drv=uio_pci_generic unused=vfio-pci
+
+    Network devices using kernel driver
+    ===================================
+    0000:29:00.0 '...' if=enp41s0f0 drv=e1000e unused=vfio-pci,uio_pci_generic
+    0000:29:00.1 '...' if=enp41s0f1 drv=e1000e unused=vfio-pci,uio_pci_generic
+
+    Other Network devices
+    =====================
+    <none>
+    ....
+
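+If you need to return a port to the kernel driver, for example to use it
+again as a normal interface, bind it back with the original driver. The
+driver name ``e1000e`` below is only an example and depends on your NIC.
+
+.. code-block:: console
+
+    # Bind the port back to the kernel driver
+    $ ./usertools/dpdk-devbind.py --bind=e1000e 2a:00.0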
+
+Confirm DPDK is set up properly
+-------------------------------
+
+You can confirm whether you are ready to use DPDK by running one of
+DPDK's sample applications. ``l2fwd`` is a good choice to try before SPP
+because it is very similar to SPP's worker process for forwarding.
+
+.. code-block:: console
+
+   $ cd $RTE_SDK/examples/l2fwd
+   $ make
+     CC main.o
+     LD l2fwd
+     INSTALL-APP l2fwd
+     INSTALL-MAP l2fwd.map
+
+In this case, run the application with just two options, although DPDK
+offers many kinds of options.
+
+  - -l: core list
+  - -p: port mask
+
+.. code-block:: console
+
+   $ sudo ./build/app/l2fwd \
+     -l 1-2 \
+     -- -p 0x3
+
+The options must be separated with ``--`` to distinguish which are for
+the EAL and which are for the application.
+Refer to `L2 Forwarding Sample Application
+<https://dpdk.org/doc/guides/sample_app_ug/l2_forward_real_virtual.html>`_
+for more details.
+
+
+Build Documentation
+-------------------
+
+This documentation can be built in HTML and PDF formats with the make
+command. Before compiling the documentation, you need to install some
+packages required for compilation.
+
+For HTML documentation, install sphinx and an additional theme.
+
+.. code-block:: console
+
+    $ pip install sphinx
+    $ pip install sphinx-rtd-theme
+
+For PDF, inkscape and latex packages are required.
+
+.. code-block:: console
+
+    $ sudo apt install inkscape
+    $ sudo apt install texlive-latex-extra
+    $ sudo apt install texlive-latex-recommended
+
+You might also need to install ``latexmk`` in addition if you use
+Ubuntu 18.04 LTS.
+
+.. code-block:: console
+
+    $ sudo apt install latexmk
+
+HTML documentation is compiled by running make with ``doc-html``. This
+command launches sphinx for compiling the HTML documents.
+Compiled HTML files are created in ``docs/guides/_build/html/``, and
+you can find the top page ``index.html`` in that directory.
+
+.. code-block:: console
+
+    $ make doc-html
+
+PDF documentation is compiled with ``doc-pdf``, which runs latex for the
+build.
+The compiled PDF file is created as
+``docs/guides/_build/html/SoftPatchPanel.pdf``.
+
+.. code-block:: console
+
+    $ make doc-pdf
+
+You can also compile both the HTML and PDF documentation with ``doc`` or
+``doc-all``.
+
+.. code-block:: console
+
+    $ make doc
+    # or
+    $ make doc-all
diff --git a/docs/guides/gsg/performance_opt.rst b/docs/guides/gsg/performance_opt.rst
new file mode 100644
index 0000000..d4a85f1
--- /dev/null
+++ b/docs/guides/gsg/performance_opt.rst
@@ -0,0 +1,82 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation
+
+Performance Optimization
+========================
+
+Reduce Context Switches
+-----------------------
+
+Use the ``isolcpus`` Linux kernel parameter to isolate the cores used by
+DPDK from the Linux scheduler and reduce context switches.
+It prevents workloads of processes other than DPDK from running on the
+cores reserved with the ``isolcpus`` parameter.
+
+For Ubuntu 16.04, define ``isolcpus`` in ``/etc/default/grub``.
+
+.. code-block:: console
+
+    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=0-3,5,7"
+
+The value of ``isolcpus`` depends on your environment and usage.
+This example reserves six cores (0, 1, 2, 3, 5 and 7).
+
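+After rebooting with the new kernel parameter, you can verify which cores
+are actually isolated; ``/sys/devices/system/cpu/isolated`` is available
+on recent kernels.
+
+.. code-block:: console
+
+    # update grub and reboot to apply the parameter
+    $ sudo update-grub
+    $ sudo reboot
+
+    # confirm the isolated cores after reboot
+    $ cat /sys/devices/system/cpu/isolated
+    0-3,5,7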
+
+Optimizing QEMU Performance
+---------------------------
+
+A QEMU process runs threads for vcpu emulation. Pinning these vcpu
+threads to dedicated cores is an effective strategy.
+
+To find the vcpu threads, use the ``ps`` command to find the PID of the
+QEMU process and the ``pstree`` command for the threads launched from it.
+
+.. code-block:: console
+
+    $ ps ea
+       PID TTY     STAT  TIME COMMAND
+    192606 pts/11  Sl+   4:42 ./x86_64-softmmu/qemu-system-x86_64 -cpu host ...
+
+Run ``pstree`` with ``-p`` and this PID to find all threads launched from QEMU.
+
+.. code-block:: console
+
+    $ pstree -p 192606
+    qemu-system-x86(192606)--+--{qemu-system-x8}(192607)
+                             |--{qemu-system-x8}(192623)
+                             |--{qemu-system-x8}(192624)
+                             |--{qemu-system-x8}(192625)
+                             |--{qemu-system-x8}(192626)
+
+Update the affinity by using the ``taskset`` command to pin the vcpu
+threads. The vcpu threads are listed from the second entry onwards.
+In this example, assign PID 192623 to core 4, PID 192624 to core 5,
+and so on.
+
+.. code-block:: console
+
+    $ sudo taskset -pc 4 192623
+    pid 192623's current affinity list: 0-31
+    pid 192623's new affinity list: 4
+    $ sudo taskset -pc 5 192624
+    pid 192624's current affinity list: 0-31
+    pid 192624's new affinity list: 5
+    $ sudo taskset -pc 6 192625
+    pid 192625's current affinity list: 0-31
+    pid 192625's new affinity list: 6
+    $ sudo taskset -pc 7 192626
+    pid 192626's current affinity list: 0-31
+    pid 192626's new affinity list: 7
+
+
+Reference
+---------
+
+* [1] `Best pinning strategy for latency/performance trade-off
+  <https://www.redhat.com/archives/vfio-users/2017-February/msg00010.html>`_
+* [2] `PVP reference benchmark setup using testpmd
+  <http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html>`_
+* [3] `Enabling Additional Functionality
+  <http://dpdk.org/doc/guides/linux_gsg/enable_func.html>`_
+* [4] `How to get best performance with NICs on Intel platforms
+  <http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html>`_
diff --git a/docs/guides/gsg/setup.rst b/docs/guides/gsg/setup.rst
new file mode 100644
index 0000000..ebcfeee
--- /dev/null
+++ b/docs/guides/gsg/setup.rst
@@ -0,0 +1,152 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2010-2014 Intel Corporation
+    Copyright(c) 2017-2019 Nippon Telegraph and Telephone Corporation
+
+
+.. _gsg_setup:
+
+Setup
+=====
+
+This documentation is written for Ubuntu 16.04 and later.
+
+
+Reserving Hugepages
+-------------------
+
+Hugepages must be enabled for running DPDK with high performance.
+Hugepage support is required to reserve a large amount of memory in
+pages of 2MB or 1GB, to use fewer TLB (Translation Lookaside Buffer)
+entries and to reduce cache misses.
+Fewer TLB entries means less time spent translating virtual addresses
+to physical ones.
+
+Hugepage reservation differs between 2MB and 1GB pages.
+
+For 1GB pages, the hugepage setting must be activated while booting the
+system.
+It must be defined in the boot loader configuration, usually
+``/etc/default/grub``.
+Add an entry to define the page size and the number of pages.
+Here is an example. ``hugepagesz`` is for the size and ``hugepages``
+is for the number of pages.
+
+.. code-block:: console
+
+    # /etc/default/grub
+    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8"
+
+.. note::
+
+    1GB hugepages might not be supported on your machine. It depends on
+    whether the CPU supports 1GB pages or not. You can check it by
+    referring to ``/proc/cpuinfo``. If it is supported, you can find
+    ``pdpe1gb`` in the ``flags`` attribute.
+
+    .. code-block:: console
+
+        $ cat /proc/cpuinfo | grep pdpe1gb
+        flags           : fpu vme ... pdpe1gb ...
+
+You should run ``update-grub`` after editing to update grub's config
+file, or the configuration will not be activated.
+
+.. code-block:: console
+
+   $ sudo update-grub
+   Generating grub configuration file ...
+
+For 2MB pages, you can activate hugepages while booting or at any time
+after the system is booted.
+Define the hugepages setting in ``/etc/default/grub`` to activate it
+while booting, or overwrite the number of 2MB hugepages as follows.
+
+.. code-block:: console
+
+    $ echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+In this case, 1024 pages of 2MB (2048 MB in total) are reserved.
+
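+You can check the reservation, as a quick sanity check, by referring to
+``/proc/meminfo``.
+
+.. code-block:: console
+
+    $ grep -i huge /proc/meminfo
+    HugePages_Total:    1024
+    HugePages_Free:     1024
+    ...
+    Hugepagesize:       2048 kB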
+
+Mount hugepages
+---------------
+
+Make the hugepage memory available to DPDK by mounting it.
+
+.. code-block:: console
+
+    $ mkdir /mnt/huge
+    $ mount -t hugetlbfs nodev /mnt/huge
+
+It can also be done at boot time by adding an entry for the mount point
+in ``/etc/fstab``.
+
+The mount point for 2MB or 1GB pages can be made persistent across
+reboots.
+For 2MB pages, there is no need to declare the size of hugepages
+explicitly.
+
+.. code-block:: console
+
+    # /etc/fstab
+    nodev /mnt/huge hugetlbfs defaults 0 0
+
+For 1GB pages, the hugepage size must be specified.
+
+.. code-block:: console
+
+    # /etc/fstab
+    nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
+
+
+Disable ASLR
+------------
+
+SPP is a DPDK multi-process application and there are a number of
+`limitations
+<https://dpdk.org/doc/guides/prog_guide/multi_proc_support.html#multi-process-limitations>`_
+.
+
+Address-Space Layout Randomization (ASLR) is a security feature for
+memory protection, but it may cause a failure of memory
+mapping while starting a multi-process application, as discussed in
+`dpdk-dev
+<http://dpdk.org/ml/archives/dev/2014-September/005236.html>`_
+.
+
+ASLR can be disabled by setting ``kernel.randomize_va_space`` to
+``0``, or enabled by setting it to ``2``.
+
+.. code-block:: console
+
+    # disable ASLR
+    $ sudo sysctl -w kernel.randomize_va_space=0
+
+    # enable ASLR
+    $ sudo sysctl -w kernel.randomize_va_space=2
+
+You can check the current value as follows.
+
+.. code-block:: console
+
+    $ sysctl -n kernel.randomize_va_space
+
+
+Vhost Client Mode
+-----------------
+
+The SPP secondary process supports the ``--vhost-client`` option for
+using vhost ports.
+In vhost client mode, qemu creates the socket file instead of the
+secondary process.
+This means that you can launch a VM before the secondary process creates
+the vhost port.
+
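+As a sketch of the difference, in this mode the ``-chardev`` of qemu is
+given the ``server`` option so that qemu owns the socket, and the
+secondary is launched with ``--vhost-client``. The option placement below
+is an assumption and may differ in your setup.
+
+.. code-block:: console
+
+    # qemu side creates /tmp/sock0 by acting as the server
+    $ sudo qemu-system-x86_64 ... \
+        -chardev socket,id=chr0,path=/tmp/sock0,server ...
+
+    # secondary launched with vhost client mode
+    $ sudo ./src/nfv/x86_64-native-linuxapp-gcc/spp_nfv \
+        -l 2-3 -n 4 --proc-type=secondary \
+        -- -n 1 -s 192.168.1.100:6666 --vhost-client
+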
+.. note::
+
+    Vhost client mode is supported by qemu 2.7 or later.
+
+
+Python 2 or 3 ?
+---------------
+
+In SPP, Python 3 is required only for running ``spp-ctl``. The other
+python scripts can be launched with both Python 2 and 3.
+
+However, Python 2 will not be maintained after 2020, and SPP is going to
+be updated to support only Python 3.
+It is planned that SPP will support only Python 3 before the end of 2019.
diff --git a/docs/guides/index.rst b/docs/guides/index.rst
index d230081..a64a7a3 100644
--- a/docs/guides/index.rst
+++ b/docs/guides/index.rst
@@ -10,7 +10,7 @@ SPP documentation
 
    overview
    design/index
-   setup/index
+   gsg/index
    use_cases/index
    commands/index
    tools/index
diff --git a/docs/guides/setup/getting_started.rst b/docs/guides/setup/getting_started.rst
deleted file mode 100644
index 8cda22b..0000000
--- a/docs/guides/setup/getting_started.rst
+++ /dev/null
@@ -1,420 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2010-2014 Intel Corporation
-    Copyright(c) 2017-2019 Nippon Telegraph and Telephone Corporation
-
-.. _getting_started:
-
-Getting Started
-===============
-
-This documentation is described for Ubuntu 16.04 and later.
-
-Setup
------
-
-Reserving Hugepages
-~~~~~~~~~~~~~~~~~~~
-
-Hugepages must be enabled for running DPDK with high performance.
-Hugepage support is required to reserve large amount size of pages,
-2MB or 1GB per page, to less TLB (Translation Lookaside Buffers) and
-to reduce cache miss.
-Less TLB means that it reduce the time for translating virtual address
-to physical.
-
-Hugepage reservation might be different for 2MB or 1GB.
-
-For 1GB page, hugepage setting must be activated while booting system.
-It must be defined in boot loader configuration, usually is
-``/etc/default/grub``.
-Add an entry to define pagesize and the number of pages.
-Here is an example. ``hugepagesz`` is for the size and ``hugepages``
-is for the number of pages.
-
-.. code-block:: console
-
-    # /etc/default/grub
-    GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8"
-
-.. note::
-
-    1GB hugepages might not be supported in your machine. It depends on
-    that CPUs support 1GB pages or not. You can check it by referring
-    ``/proc/cpuinfo``. If it is supported, you can find ``pdpe1gb`` in
-    the ``flags`` attribute.
-
-    .. code-block:: console
-
-        $ cat /proc/cpuinfo | grep pdpe1gb
-        flags           : fpu vme ... pdpe1gb ...
-
-You should run ``update-grub`` after editing to update grub's config file,
-or this configuration is not activated.
-
-.. code-block:: console
-
-   $ sudo update-grub
-   Generating grub configuration file ...
-
-For 2MB page, you can activate hugepages while booting or at anytime
-after system is booted.
-Define hugepages setting in ``/etc/default/grub`` to activate it while
-booting, or overwrite the number of 2MB hugepages as following.
-
-.. code-block:: console
-
-    $ echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-
-In this case, 1024 pages of 2MB (totally 2048 MB) are reserved.
-
-
-Mount hugepages
-~~~~~~~~~~~~~~~
-
-Make the memory available for using hugepages from DPDK.
-
-.. code-block:: console
-
-    $ mkdir /mnt/huge
-    $ mount -t hugetlbfs nodev /mnt/huge
-
-It is also available while booting by adding a configuration of mount
-point in ``/etc/fstab``, or after booted.
-
-The mount point for 2MB or 1GB can be made permanent accross reboot.
-For 2MB, it is no need to declare the size of hugepages explicity.
-
-.. code-block:: console
-
-    # /etc/fstab
-    nodev /mnt/huge hugetlbfs defaults 0 0
-
-For 1GB, the size of hugepage must be specified.
-
-.. code-block:: console
-
-    # /etc/fstab
-    nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
-
-
-Disable ASLR
-~~~~~~~~~~~~
-
-SPP is a DPDK multi-process application and there are a number of
-`limitations
-<https://dpdk.org/doc/guides/prog_guide/multi_proc_support.html#multi-process-limitations>`_
-.
-
-Address-Space Layout Randomization (ASLR) is a security feature for
-memory protection, but may cause a failure of memory
-mapping while starting multi-process application as discussed in
-`dpdk-dev
-<http://dpdk.org/ml/archives/dev/2014-September/005236.html>`_
-.
-
-ASLR can be disabled by assigning ``kernel.randomize_va_space`` to
-``0``, or be enabled by assigning it to ``2``.
-
-.. code-block:: console
-
-    # disable ASLR
-    $ sudo sysctl -w kernel.randomize_va_space=0
-
-    # enable ASLR
-    $ sudo sysctl -w kernel.randomize_va_space=2
-
-You can check the value as following.
-
-.. code-block:: console
-
-    $ sysctl -n kernel.randomize_va_space
-
-.. _install_dpdk_spp:
-
-Install DPDK and SPP
---------------------
-
-Before using SPP, you need to install DPDK.
-In this document, briefly describ how to install and setup DPDK.
-Refer to `DPDK documentation
-<https://dpdk.org/doc/guides/>`_ for more details.
-For Linux, see `Getting Started Guide for Linux
-<http://www.dpdk.org/doc/guides/linux_gsg/index.html>`_ .
-
-DPDK
-~~~~
-
-Clone repository and compile DPDK in any directory.
-
-.. code-block:: console
-
-    $ cd /path/to/any
-    $ git clone http://dpdk.org/git/dpdk
-
-To compile DPDK, required to install libnuma-devel library.
-
-.. code-block:: console
-
-    $ sudo apt install libnuma-dev
-
-Python and pip are also required if not installed.
-
-.. code-block:: console
-
-    # Python2
-    $ sudo apt install python python-pip
-
-    # Python3
-    $ sudo apt install python3 python3-pip
-
-SPP provides libpcap-based PMD for dumping packet to a file or retrieve
-it from the file.
-To use PCAP PMD, install ``libpcap-dev`` and enable it.
-``text2pcap`` is also required for creating pcap file which
-is included in ``wireshark``.
-
-.. code-block:: console
-
-    $ sudo apt install libpcap-dev
-    $ sudo apt install wireshark
-
-PCAP is disabled by default in DPDK configuration.
-``CONFIG_RTE_LIBRTE_PMD_PCAP`` and ``CONFIG_RTE_PORT_PCAP`` define the
-configuration and enabled it to ``y``.
-
-.. code-block:: console
-
-    # dpdk/config/common_base
-    CONFIG_RTE_LIBRTE_PMD_PCAP=y
-    ...
-    CONFIG_RTE_PORT_PCAP=y
-
-Compile DPDK with target environment.
-
-.. code-block:: console
-
-    $ cd dpdk
-    $ export RTE_SDK=$(pwd)
-    $ export RTE_TARGET=x86_64-native-linuxapp-gcc  # depends on your env
-    $ make install T=$RTE_TARGET
-
-
-SPP
-~~~
-
-Clone repository and compile SPP in any directory.
-
-.. code-block:: console
-
-    $ cd /path/to/any
-    $ git clone http://dpdk.org/git/apps/spp
-    $ cd spp
-    $ make  # Confirm that $RTE_SDK and $RTE_TARGET are set
-
-It also required to install Python3 and packages for running python scripts
-as following.
-You might need to run ``pip3`` with ``sudo`` if it is failed.
-
-.. code-block:: console
-
-    $ sudo apt update
-    $ sudo apt install python3
-    $ sudo apt install python3-pip
-    $ pip3 install -r requirements.txt
-
-
-Python 2 or 3 ?
-~~~~~~~~~~~~~~~
-
-In SPP, Python3 is required only for running ``spp-ctl``. Other python scripts
-are able to be launched both of Python2 and 3.
-
-Howevrer, Python2 will not be maintained after 2020 and SPP is going to update
-only supporting Python3.
-In SPP, it is planned to support only Python3 before the end of 2019.
-
-
-Binding Network Ports to DPDK
------------------------------
-
-Network ports must be bound to DPDK with a UIO (Userspace IO) driver.
-UIO driver is for mapping device memory to userspace and registering
-interrupts.
-
-UIO Drivers
-~~~~~~~~~~~
-
-You usually use the standard ``uio_pci_generic`` for many use cases
-or ``vfio-pci`` for more robust and secure cases.
-Both of drivers are included by default in modern Linux kernel.
-
-.. code-block:: console
-
-    # Activate uio_pci_generic
-    $ sudo modprobe uio_pci_generic
-
-    # or vfio-pci
-    $ sudo modprobe vfio-pci
-
-You can also use kmod included in DPDK instead of ``uio_pci_generic``
-or ``vfio-pci``.
-
-.. code-block:: console
-
-    $ sudo modprobe uio
-    $ sudo insmod kmod/igb_uio.ko
-
-Binding Network Ports
-~~~~~~~~~~~~~~~~~~~~~
-
-Once UIO driver is activated, bind network ports with the driver.
-DPDK provides ``usertools/dpdk-devbind.py`` for managing devices.
-
-Find ports for binding to DPDK by running the tool with ``-s`` option.
-
-.. code-block:: console
-
-    $ $RTE_SDK/usertools/dpdk-devbind.py --status
-
-    Network devices using DPDK-compatible driver
-    ============================================
-    <none>
-
-    Network devices using kernel driver
-    ===================================
-    0000:29:00.0 '82571EB ... 10bc' if=enp41s0f0 drv=e1000e unused=
-    0000:29:00.1 '82571EB ... 10bc' if=enp41s0f1 drv=e1000e unused=
-    0000:2a:00.0 '82571EB ... 10bc' if=enp42s0f0 drv=e1000e unused=
-    0000:2a:00.1 '82571EB ... 10bc' if=enp42s0f1 drv=e1000e unused=
-
-    Other Network devices
-    =====================
-    <none>
-    ....
-
-You can find network ports are bound to kernel driver and not to DPDK.
-To bind a port to DPDK, run ``dpdk-devbind.py`` with specifying a driver
-and a device ID.
-Device ID is a PCI address of the device or more friendly style like
-``eth0`` found by ``ifconfig`` or ``ip`` command..
-
-.. code-block:: console
-
-    # Bind a port with 2a:00.0 (PCI address)
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic 2a:00.0
-
-    # or eth0
-    ./usertools/dpdk-devbind.py --bind=uio_pci_generic eth0
-
-
-After binding two ports, you can find it is under the DPDK driver and
-cannot find it by using ``ifconfig`` or ``ip``.
-
-.. code-block:: console
-
-    $ $RTE_SDK/usertools/dpdk-devbind.py -s
-
-    Network devices using DPDK-compatible driver
-    ============================================
-    0000:2a:00.0 '82571EB ... 10bc' drv=uio_pci_generic unused=vfio-pci
-    0000:2a:00.1 '82571EB ... 10bc' drv=uio_pci_generic unused=vfio-pci
-
-    Network devices using kernel driver
-    ===================================
-    0000:29:00.0 '...' if=enp41s0f0 drv=e1000e unused=vfio-pci,uio_pci_generic
-    0000:29:00.1 '...' if=enp41s0f1 drv=e1000e unused=vfio-pci,uio_pci_generic
-
-    Other Network devices
-    =====================
-    <none>
-    ....
-
-
-Confirm DPDK is setup properly
-------------------------------
-
-You can confirm if you are ready to use DPDK by running DPDK's sample
-application. ``l2fwd`` is good choice to confirm it before SPP because
-it is very similar to SPP's worker process for forwarding.
-
-.. code-block:: console
-
-   $ cd $RTE_SDK/examples/l2fwd
-   $ make
-     CC main.o
-     LD l2fwd
-     INSTALL-APP l2fwd
-     INSTALL-MAP l2fwd.map
-
-In this case, run this application simply with just two options
-while DPDK has many kinds of options.
-
-  - -l: core list
-  - -p: port mask
-
-.. code-block:: console
-
-   $ sudo ./build/app/l2fwd \
-     -l 1-2 \
-     -- -p 0x3
-
-It must be separated with ``--`` to specify which option is
-for EAL or application.
-Refer to `L2 Forwarding Sample Application
-<https://dpdk.org/doc/guides/sample_app_ug/l2_forward_real_virtual.html>`_
-for more details.
-
-
-Build Documentation
--------------------
-
-This documentation is able to be biult as HTML and PDF formats from make
-command. Before compiling the documentation, you need to install some of
-packages required to compile.
-
-For HTML documentation, install sphinx and additional theme.
-
-.. code-block:: console
-
-    $ pip install sphinx
-    $ pip install sphinx-rtd-theme
-
-For PDF, inkscape and latex packages are required.
-
-.. code-block:: console
-
-    $ sudo apt install inkscape
-    $ sudo apt install texlive-latex-extra
-    $ sudo apt install texlive-latex-recommended
-
-You might also need to install ``latexmk`` in addition to if you use
-Ubuntu 18.04 LTS.
-
-.. code-block:: console
-
-    $ sudo apt install latexmk
-
-HTML documentation is compiled by running make with ``doc-html``. This
-command launch sphinx for compiling HTML documents.
-Compiled HTML files are created in ``docs/guides/_build/html/`` and
-You can find the top page ``index.html`` in the directory.
-
-.. code-block:: console
-
-    $ make doc-html
-
-PDF documentation is compiled with ``doc-pdf`` which runs latex for.
-Compiled PDF file is created as ``docs/guides/_build/html/SoftPatchPanel.pdf``.
-
-.. code-block:: console
-
-    $ make doc-pdf
-
-You can also compile both of HTML and PDF documentations with ``doc`` or
-``doc-all``.
-
-.. code-block:: console
-
-    $ make doc
-    # or
-    $ make doc-all
diff --git a/docs/guides/setup/howto_use.rst b/docs/guides/setup/howto_use.rst
deleted file mode 100644
index cc3dd9b..0000000
--- a/docs/guides/setup/howto_use.rst
+++ /dev/null
@@ -1,578 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2010-2014 Intel Corporation
-
-.. _spp_setup_howto_use:
-
-How to Use
-==========
-
-As described in :ref:`Design<spp_overview_design>`, SPP consists of
-primary process for managing resources, secondary processes for
-forwarding packet, and SPP controller to accept user commands and
-send it to SPP processes.
-
-You should keep in mind the order of launching processes.
-Primary process must be launched before secondary processes.
-``spp-ctl`` need to be launched before SPP CLI, but no need to be launched
-before other processes. SPP CLI is launched from ``spp.py``.
-If ``spp-ctl`` is not running after primary and
-secondary processes are launched, processes wait ``spp-ctl`` is launched.
-
-In general, ``spp-ctl`` should be launched first, then SPP CLI and
-``spp_primary`` in each of terminals without running as background process.
-After ``spp_primary``, you launch secondary processes for your usage.
-If you just patch two DPDK applications on host, it is enough to use one
-``spp_nfv``, or use ``spp_vf`` if you need to classify packets.
-How to use of these secondary processes is described in next chapters.
-
-
-SPP Controller
---------------
-
-SPP Controller consists of ``spp-ctl`` and SPP CLI.
-
-spp-ctl
-~~~~~~~
-
-``spp-ctl`` is a HTTP server for REST APIs for managing SPP
-processes. In default, it is accessed with URL ``http://127.0.0.1:7777``
-or ``http://localhost:7777``.
-``spp-ctl`` shows no messages at first after launched, but shows
-log messages for events such as receiving a request or terminating
-a process.
-
-.. code-block:: console
-
-    # terminal 1
-    $ cd /path/to/spp
-    $ python3 src/spp-ctl/spp-ctl
-
-Notice that ``spp-ctl`` is implemented in ``python3`` and cannot be
-launched with ``python2``.
-
-It has a option ``-b`` for binding address explicitly to be accessed
-from other than default, ``127.0.0.1`` or ``localhost``.
-If you deploy SPP on multiple nodes, you might need to use ``-b`` option
-it to be accessed from other processes running on other than local node.
-
-.. code-block:: console
-
-    # launch with URL http://192.168.1.100:7777
-    $ python3 src/spp-ctl/spp-ctl -b 192.168.1.100
-
-``spp-ctl`` is the most important process in SPP. For some usecases,
-you might better to manage this process with ``systemd``.
-Here is a simple example of service file for systemd.
-
-.. code-block:: none
-
-    [Unit]
-    Description = SPP Controller
-
-    [Service]
-    ExecStart = /usr/bin/python3 /path/to/spp/src/spp-ctl/spp-ctl
-    User = root
-
-All of options can be referred with help option ``-h``.
-
-.. code-block:: console
-
-    python3 ./src/spp-ctl/spp-ctl -h
-    usage: spp-ctl [-h] [-b BIND_ADDR] [-p PRI_PORT] [-s SEC_PORT] [-a API_PORT]
-
-    SPP Controller
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      -b BIND_ADDR, --bind-addr BIND_ADDR
-                            bind address, default=localhost
-      -p PRI_PORT           primary port, default=5555
-      -s SEC_PORT           secondary port, default=6666
-      -a API_PORT           web api port, default=7777
-
-.. _spp_setup_howto_use_spp_cli:
-
-SPP CLI
-~~~~~~~
-
-If ``spp-ctl`` is launched, go to the next terminal and launch SPP CLI.
-It supports both of Python 2 and 3, so use ``python`` in this case.
-
-.. code-block:: console
-
-    # terminal 2
-    $ cd /path/to/spp
-    $ python src/spp.py
-    Welcome to the spp.   Type help or ? to list commands.
-
-    spp >
-
-If you launched ``spp-ctl`` with ``-b`` option, you also need to use the same
-option for ``spp.py``, or failed to connect and to launch.
-
-.. code-block:: console
-
-    # terminal 2
-    # bind to spp-ctl on http://192.168.1.100:7777
-    $ python src/spp.py -b 192.168.1.100
-    Welcome to the spp.   Type help or ? to list commands.
-
-    spp >
-
-One of the typical usecase of this option is to deploy multiple SPP nodes.
-:numref:`figure_spp_howto_multi_spp` is an exmaple of multiple nodes case.
-There are three nodes on each of which ``spp-ctl`` is running for accepting
-requests for SPP. These ``spp-ctl`` processes are controlled from
-``spp.py`` on host1 and all of paths are configured across the nodes.
-It is also able to be configured between hosts by changing
-soure or destination of phy ports.
-
-.. _figure_spp_howto_multi_spp:
-
-.. figure:: ../images/setup/howto_use/spp_howto_multi_spp.*
-   :width: 80%
-
-   Multiple SPP nodes
-
-Launch SPP CLI with three entries of binding addresses with ``-b`` option
-for specifying ``spp-ctl``.
-
-.. code-block:: console
-
-    # Launch SPP CLI with three nodes
-    $ python src/spp.py -b 192.168.11.101 \
-        -b 192.168.11.102 \
-        -b 192.168.11.103 \
-
-You can also add nodes after SPP CLI is launched.
-
-.. code-block:: console
-
-    # Launch SPP CLI with one node
-    $ python src/spp.py -b 192.168.11.101
-    Welcome to the SPP CLI. Type `help` or `?` to list commands.
-
-    # Add the rest of nodes after
-    spp > server add 192.168.11.102
-    Registered spp-ctl "192.168.11.102:7777".
-    spp > server add 192.168.11.103
-    Registered spp-ctl "192.168.11.103:7777".
-
-You find the host under the management of SPP CLI and switch with
-``server`` command.
-
-.. code-block:: none
-
-    spp > server list
-      1: 192.168.1.101:7777 *
-      2: 192.168.1.102:7777
-      3: 192.168.1.103:7777
-
-To change the server, add an index number after ``server``.
-
-.. code-block:: none
-
-    # Launch SPP CLI
-    spp > server 3
-    Switch spp-ctl to "3: 192.168.1.103:7777".
-
-All of options can be referred with help option ``-h``.
-
-.. code-block:: console
-
-    $ python src/spp.py -h
-    usage: spp.py [-h] [-b BIND_ADDR] [-a API_PORT]
-
-    SPP Controller
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      -b BIND_ADDR, --bind-addr BIND_ADDR
-                            bind address, default=127.0.0.1
-      -a API_PORT, --api-port API_PORT
-                        bind address, default=777
-
-All of SPP CLI commands are described in :doc:`../../commands/index`.
-
-
-Default Configuration
-^^^^^^^^^^^^^^^^^^^^^
-
-SPP CLI imports several params from configuration file while launching.
-Some of behaviours of SPP CLI depends on the params.
-The default configuration is defined in
-``src/controller/config/default.yml``.
-You can change this params by editing the config file, or from ``config``
-command after SPP CLI is launched.
-
-All of config params are referred by ``config`` command.
-
-.. code-block:: none
-
-    # show list of config
-    spp > config
-    - max_secondary: "16"       # The maximum number of secondary processes
-    - sec_nfv_nof_lcores: "1"   # Default num of lcores for workers of spp_nfv
-    ....
-
-To change the config, set a value for the param.
-Here is an example for changing command prompt.
-
-.. code-block:: none
-
-    # set prompt to "$ spp "
-    spp > config prompt "$ spp "
-    Set prompt: "$ spp "
-    $ spp
-
-
-SPP Primary
------------
-
-SPP primary is a resource manager and has a responsibility for
-initializing EAL for secondary processes. It should be launched before
-secondary.
-
-To launch SPP primary, run ``spp_primary`` with specific options.
-
-.. code-block:: console
-
-    # terminal 3
-    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
-        -l 1 -n 4 \
-        --socket-mem 512,512 \
-        --huge-dir /dev/hugepages \
-        --proc-type primary \
-        --base-virtaddr 0x100000000
-        -- \
-        -p 0x03 \
-        -n 10 \
-        -s 192.168.1.100:5555
-
-SPP primary takes EAL options and application specific options.
-
-Core list option ``-l`` is for assigining cores and SPP primary requires just
-one core. You can use core mask option ``-c`` instead of ``-l``.
-You can use ``-m 1024`` for memory reservation instead of
-``--socket-mem 1024,0`` if you use single NUMA node. In this case, 512 MB is
-reserved on each of nodes.
-
-.. note::
-
-   If you use DPDK v18.08 or before,
-   you should consider giving ``--base-virtaddr`` a value of 4 GiB or higher,
-   because a secondary process can fail to mmap while initializing
-   memory. The failure happens because the secondary process tries to reserve
-   a region which is already used by a thread of the primary.
-
-   .. code-block:: console
-
-      # Failed to secondary
-      EAL: Could not mmap 17179869184 ... - please use '--base-virtaddr' option
-
-   ``--base-virtaddr`` sets the base address explicitly to avoid this
-   overlap. 4 GiB, ``0x100000000``, is enough for the purpose.
-
-   If you use DPDK v18.11 or later, ``--base-virtaddr 0x100000000`` is enabled
-   by default. You need this option only for changing the default value.
-
-
-In general, one lcore is enough for ``spp_primary``. If you give two or
-more, it uses the second lcore to display statistics periodically and does
-not use the others.
-
-.. note::
-
-    You can still get statistics in SPP CLI with the ``pri; status`` command
-    even if you give only one core.
-
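-For example, the statistics can be displayed at any time from SPP CLI. The
-output is omitted here because its exact format depends on your ports and
-version.
-
-.. code-block:: none
-
-    # terminal 2
-    spp > pri; status
-    ...
-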
-The primary process sets up physical ports for the port mask given with the
-``-p`` option, and as many ring ports as given with the ``-n`` option. Ports
-of the ``-p`` option are for accepting incoming packets and ring ports are
-for inter-process packet forwarding. You can also add ports initialized with
-the ``--vdev`` option as physical ports, as ``eth_vhost1`` and ``eth_vhost2``
-are in the example below. However, ports added with ``--vdev`` cannot be
-referred to from secondary processes.
-
-.. code-block:: console
-
-    # terminal 3
-    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
-        -l 1 -n 4 \
-        --socket-mem 512,512 \
-        --huge-dir=/dev/hugepages \
-        --vdev eth_vhost1,iface=/tmp/sock1 \
-        --vdev eth_vhost2,iface=/tmp/sock2 \
-        --proc-type=primary \
-        --base-virtaddr 0x100000000 \
-        -- \
-        -p 0x03 \
-        -n 10 \
-        -s 192.168.1.100:5555
-
-- EAL options:
-
-  - -l: core list
-  - --socket-mem: Memory size on each NUMA node.
-  - --huge-dir: Path of hugepage dir.
-  - --proc-type: Process type.
-  - --base-virtaddr: Specify base virtual address.
-
-- Application options:
-
-  - -p: Port mask.
-  - -n: Number of ring PMD.
-  - -s: IP address of controller and port prepared for primary.
-
-
-SPP Secondary
--------------
-
-A secondary process behaves as a client of the primary process and as a
-worker for packet processing tasks. There are several kinds of secondary
-processes, for example, simply forwarding packets between ports or
-classifying packets by referring to the packet header.
-
-This section describes the simplest one, ``spp_nfv``, which simply forwards
-packets similarly to ``l2fwd``.
-
-
-Launch spp_nfv on Host
-~~~~~~~~~~~~~~~~~~~~~~
-
-Run ``spp_nfv`` with options.
-
-.. code-block:: console
-
-    # terminal 4
-    $ cd /path/to/spp
-    $ sudo ./src/nfv/x86_64-native-linuxapp-gcc/spp_nfv \
-        -l 2-3 -n 4 \
-        --proc-type=secondary \
-        -- \
-        -n 1 \
-        -s 192.168.1.100:6666
-
-- EAL options:
-
-  - -l: core list (two cores required)
-  - --proc-type: process type
-
-- Application options:
-
-  - -n: secondary ID
-  - -s: IP address of controller and port prepared for secondary
-
-The secondary ID is used to identify the process when sending messages and
-must be unique among all secondaries.
-If you attempt to launch a secondary process with an ID which is already in
-use, the launch fails.
-
-
-Launch from SPP CLI
-~~~~~~~~~~~~~~~~~~~
-
-You can launch SPP secondary processes from SPP CLI without opening
-other terminals. The ``pri; launch`` command launches any of the secondary
-processes with specific options. It takes the secondary type, ID and the EAL
-and application options, similarly to launching from a terminal.
-Here is an example of launching ``spp_nfv``. You may notice that there is no
-``--proc-type secondary``, which is normally required for a secondary process.
-It is added to the options by SPP CLI before launching the process.
-
-.. code-block:: none
-
-    # terminal 2
-    # launch spp_nfv with sec ID 2
-    spp > pri; launch nfv 2 -l 1,2 -m 512 -- -n 2 -s 192.168.1.100:6666
-    Send request to launch nfv:2.
-
-After running this command, you can find ``nfv:2`` is launched
-successfully.
-
-.. code-block:: none
-
-    # terminal 2
-    spp > status
-    - spp-ctl:
-      - address: 192.168.1.100:7777
-    - primary:
-      - status: running
-    - secondary:
-      - processes:
-        1: nfv:2
-
-Instead of displaying log messages in a terminal, the process outputs the
-messages to a log file. All log files of secondary processes launched with
-``pri`` are located in the ``log/`` directory under the project root.
-In this example, the log file is ``log/spp_nfv-2.log``.
-
-.. code-block:: console
-
-    # terminal 5
-    $ tail -f log/spp_nfv-2.log
-    SPP_NFV: Used lcores: 1 2
-    SPP_NFV: entering main loop on lcore 2
-    SPP_NFV: My ID 2 start handling message
-    SPP_NFV: [Press Ctrl-C to quit ...]
-    SPP_NFV: Creating socket...
-    SPP_NFV: Trying to connect ... socket 24
-    SPP_NFV: Connected
-    SPP_NFV: Received string: _get_client_id
-    SPP_NFV: token 0 = _get_client_id
-    SPP_NFV: To Server: {"results":[{"result":"success"}],"client_id":2, ...
-
-
-Launch SPP on VM
-~~~~~~~~~~~~~~~~
-
-To communicate with a DPDK application running on a VM,
-you need to create a virtual device for the VM.
-In this instruction, you launch a VM with the qemu command and
-create ``vhost-user`` and ``virtio-net-pci`` devices on the VM.
-
-Before launching the VM, you need to prepare a socket file for creating
-the ``vhost-user`` device.
-Run the ``add`` command with resource UID ``vhost:0`` to create the socket
-file.
-
-.. code-block:: none
-
-    # terminal 2
-    spp > nfv 1; add vhost:0
-
-In this example, a socket file with index 0 is created by ``spp_nfv`` of
-ID 1. The socket file is created as ``/tmp/sock0``.
-It is used as a qemu option to add a vhost interface.
-
-Launch the VM with ``qemu-system-x86_64`` for the x86 64-bit architecture.
-Qemu takes many options for defining resources, including virtual
-devices. You cannot use this example as it is because some options
-depend on your environment.
-You should specify the disk image with ``-hda`` (the sixth option in this
-example), and the ``qemu-ifup`` script (the 12th line) for assigning an IP
-address so that the VM can be accessed.
-
-.. code-block:: console
-
-    # terminal 5
-    $ sudo qemu-system-x86_64 \
-        -cpu host \
-        -enable-kvm \
-        -numa node,memdev=mem \
-        -mem-prealloc \
-        -hda /path/to/image.qcow2 \
-        -m 4096 \
-        -smp cores=4,threads=1,sockets=1 \
-        -object \
-        memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
-        -device e1000,netdev=net0,mac=00:AD:BE:B3:11:00 \
-        -netdev tap,id=net0,ifname=net0,script=/path/to/qemu-ifup \
-        -nographic \
-        -chardev socket,id=chr0,path=/tmp/sock0 \
-        -netdev vhost-user,id=net1,chardev=chr0,vhostforce \
-        -device virtio-net-pci,netdev=net1,mac=00:AD:BE:B4:11:00 \
-        -monitor telnet::44911,server,nowait
-
-This VM has two network interfaces.
-``-device e1000`` is a management network port
-which requires the ``qemu-ifup`` script to activate it while launching.
-The management network port is used to log in and set up the VM.
-``-device virtio-net-pci`` is created for SPP or a DPDK application
-running on the VM.
-
-``vhost-user`` is the backend of ``virtio-net-pci``. It requires
-the socket file ``/tmp/sock0`` created by the secondary process and given
-with the ``-chardev`` option.
-
-For other options, please refer to
-`QEMU User Documentation
-<https://qemu.weilnetz.de/doc/qemu-doc.html>`_.
-
-.. note::
-
-    In general, you need to prepare several qemu images for launching
-    several VMs, but installing DPDK and SPP on each image is bothersome
-    and time consuming.
-
-    You can shortcut this task by creating a template image and copying it
-    for each VM. The installation then needs to be done only once, for the
-    template.
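-
-    As a sketch, preparing an image for a new VM can be as simple as copying
-    the template. The file names here are only an illustration.
-
-    .. code-block:: console
-
-        # on the host
-        $ cp /path/to/template.qcow2 /path/to/vm1.qcow2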
-
-After the VM is booted, install DPDK and SPP in the VM as on the host.
-The IP address of the VM is assigned while it is created, and you can find
-the address in a file generated by libvirt if you use Ubuntu.
-
-.. code-block:: console
-
-    # terminal 5
-    $ cat /var/lib/libvirt/dnsmasq/virbr0.status
-    [
-        {
-            "ip-address": "192.168.122.100",
-            ...
-
-    # Login VM, install DPDK and SPP
-    $ ssh user@192.168.122.100
-    ...
-
-It is recommended to configure ``/etc/default/grub`` for hugepages and
-reboot the VM after installation.
-
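-For example, a hugepage setting in ``/etc/default/grub`` could look like the
-following. The page size and the number of pages are only an example and
-depend on the memory of your VM. On Ubuntu, run ``update-grub`` and reboot
-to apply the change.
-
-.. code-block:: console
-
-    GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=2M hugepages=1024"
-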
-Finally, log in to the VM, bind ports to DPDK and launch ``spp-ctl``
-and ``spp_primary``.
-You should add the ``-b`` option so that it can be accessed from SPP CLI
-on the host.
-
-.. code-block:: console
-
-    # terminal 5
-    $ ssh user@192.168.122.100
-    $ cd /path/to/spp
-    $ python3 src/spp-ctl/spp-ctl -b 192.168.122.100
-    ...
-
-Confirm that virtio interfaces are under the management of DPDK before
-launching DPDK processes.
-
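-You can check the binding status with DPDK's ``dpdk-devbind.py`` tool, for
-example as below, assuming ``RTE_SDK`` points to your DPDK directory; the
-output is abbreviated. Then launch ``spp_primary`` on the VM.
-
-.. code-block:: console
-
-    # terminal 6
-    $ ssh user@192.168.122.100
-    $ $RTE_SDK/usertools/dpdk-devbind.py --status
-    Network devices using DPDK-compatible driver
-    ============================================
-    ...
-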
-.. code-block:: console
-
-    # terminal 6
-    $ ssh user@192.168.122.100
-    $ cd /path/to/spp
-    $ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
-        -l 1 -n 4 \
-        -m 1024 \
-        --huge-dir=/dev/hugepages \
-        --proc-type=primary \
-        --base-virtaddr 0x100000000 \
-        -- \
-        -p 0x03 \
-        -n 6 \
-        -s 192.168.122.100:5555
-
-You can configure SPP running on the VM from SPP CLI.
-Use the ``server`` command to switch the node under management.
-
-.. code-block:: none
-
-    # terminal 2
-    # show list of spp-ctl nodes
-    spp > server
-    1: 192.168.1.100:7777 *
-    2: 192.168.122.100:7777
-
-    # change node under the management
-    spp > server 2
-    Switch spp-ctl to "2: 192.168.122.100:7777".
-
-    # confirm node is switched
-    spp > server
-    1: 192.168.1.100:7777
-    2: 192.168.122.100:7777 *
-
-    # configure SPP on VM
-    spp > status
-    ...
-
-Now you are ready to set up your network environment for DPDK and non-DPDK
-applications with SPP.
-SPP enables users to configure service function chaining between applications
-running on the host and on VMs.
-Use cases of network configuration are explained in the next chapter.
diff --git a/docs/guides/setup/index.rst b/docs/guides/setup/index.rst
deleted file mode 100644
index bc8d8a6..0000000
--- a/docs/guides/setup/index.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2010-2014 Intel Corporation
-
-Setup Guide
-===========
-
-.. toctree::
-   :maxdepth: 2
-   :numbered:
-
-   getting_started
-   howto_use
-   performance_opt
diff --git a/docs/guides/setup/performance_opt.rst b/docs/guides/setup/performance_opt.rst
deleted file mode 100644
index d4a85f1..0000000
--- a/docs/guides/setup/performance_opt.rst
+++ /dev/null
@@ -1,82 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2010-2014 Intel Corporation
-
-Performance Optimization
-========================
-
-Reduce Context Switches
------------------------
-
-Use the ``isolcpus`` Linux kernel parameter to isolate the cores used for
-DPDK from the Linux scheduler to reduce context switches.
-It prevents processes other than DPDK from running on the cores reserved
-with the ``isolcpus`` parameter.
-
-For Ubuntu 16.04, define ``isolcpus`` in ``/etc/default/grub``.
-
-.. code-block:: console
-
-    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=0-3,5,7"
-
-The value of ``isolcpus`` depends on your environment and usage.
-This example reserves six cores (0, 1, 2, 3, 5 and 7).
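-
-After rebooting, you can confirm which cores are isolated via sysfs on
-recent kernels.
-
-.. code-block:: console
-
-    $ cat /sys/devices/system/cpu/isolated
-    0-3,5,7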
-
-
-Optimizing QEMU Performance
----------------------------
-
-A QEMU process runs threads for vcpu emulation. Pinning vcpu threads to
-dedicated cores is an effective strategy.
-
-To find vcpu threads, use the ``ps`` command to find the PID of the QEMU
-process and the ``pstree`` command to list the threads launched from it.
-
-.. code-block:: console
-
-    $ ps ea
-       PID TTY     STAT  TIME COMMAND
-    192606 pts/11  Sl+   4:42 ./x86_64-softmmu/qemu-system-x86_64 -cpu host ...
-
-Run ``pstree`` with ``-p`` and this PID to find all threads launched from QEMU.
-
-.. code-block:: console
-
-    $ pstree -p 192606
-    qemu-system-x86(192606)--+--{qemu-system-x8}(192607)
-                             |--{qemu-system-x8}(192623)
-                             |--{qemu-system-x8}(192624)
-                             |--{qemu-system-x8}(192625)
-                             |--{qemu-system-x8}(192626)
-
-Update the affinity with the ``taskset`` command to pin the vcpu threads.
-The vcpu threads are listed from the second entry onward.
-In this example, assign PID 192623 to core 4, PID 192624 to core 5,
-and so on.
-
-.. code-block:: console
-
-    $ sudo taskset -pc 4 192623
-    pid 192623's current affinity list: 0-31
-    pid 192623's new affinity list: 4
-    $ sudo taskset -pc 5 192624
-    pid 192624's current affinity list: 0-31
-    pid 192624's new affinity list: 5
-    $ sudo taskset -pc 6 192625
-    pid 192625's current affinity list: 0-31
-    pid 192625's new affinity list: 6
-    $ sudo taskset -pc 7 192626
-    pid 192626's current affinity list: 0-31
-    pid 192626's new affinity list: 7
-
-
-Reference
----------
-
-* [1] `Best pinning strategy for latency/performance trade-off
-  <https://www.redhat.com/archives/vfio-users/2017-February/msg00010.html>`_
-* [2] `PVP reference benchmark setup using testpmd
-  <http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html>`_
-* [3] `Enabling Additional Functionality
-  <http://dpdk.org/doc/guides/linux_gsg/enable_func.html>`_
-* [4] `How to get best performance with NICs on Intel platforms
-  <http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html>`_
diff --git a/docs/guides/tools/sppc/getting_started.rst b/docs/guides/tools/sppc/getting_started.rst
index 6a40b12..d92b55f 100644
--- a/docs/guides/tools/sppc/getting_started.rst
+++ b/docs/guides/tools/sppc/getting_started.rst
@@ -17,7 +17,7 @@ Setup DPDK and SPP
 
 First of all, you need to clone DPDK and setup hugepages for running
 DPDK application as described in
-:doc:`../../setup/getting_started`
+:doc:`../../gsg/setup`
 or DPDK's
 `Getting Started Guide
 <https://dpdk.org/doc/guides/linux_gsg/sys_reqs.html>`_.
@@ -26,7 +26,7 @@ You also need to load kernel modules and bind network ports as in
 <https://dpdk.org/doc/guides/linux_gsg/linux_drivers.html>`_.
 
 Then, as described in
-:doc:`../../setup/getting_started`
+:doc:`../../gsg/install`
 , clone and compile SPP in any directory.
 
 .. code-block:: console
diff --git a/docs/guides/use_cases/spp_nfv.rst b/docs/guides/use_cases/spp_nfv.rst
index 39204e3..31ca4ea 100644
--- a/docs/guides/use_cases/spp_nfv.rst
+++ b/docs/guides/use_cases/spp_nfv.rst
@@ -382,7 +382,7 @@ Pcap PMD is an interface for capturing or restoring traffic.
 For using pcap PMD, you should set ``CONFIG_RTE_LIBRTE_PMD_PCAP``
 and ``CONFIG_RTE_PORT_PCAP`` to ``y`` and compile DPDK before SPP.
 Refer to
-:ref:`Install DPDK and SPP<install_dpdk_spp>`
+:ref:`Install DPDK and SPP<setup_install_dpdk_spp>`
 for details of setting up.
 
 Pcap PMD has two different streams for rx and tx.
-- 
2.7.4

Thread overview: 21+ messages
2019-02-18 11:48 [spp] [PATCH 00/20] Remove SPP VF chapter in docs ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 01/20] docs: move design of SPP VF ogawa.yasufumi
2019-02-18 11:48 ` ogawa.yasufumi [this message]
2019-02-18 11:48 ` [spp] [PATCH 03/20] docs: move libvirt setup to gsg ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 04/20] docs: move virsh setup section ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 05/20] docs: move package installation to gsg ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 06/20] docs: move descs of packet copy mode of spp_mirror ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 07/20] docs: move usecase of spp_vf ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 08/20] docs: update usecase of ssh with spp_vf ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 09/20] docs: update how to use for virsh ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 10/20] docs: update usecase of spp_mirror ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 11/20] docs: revise how to use and usecases ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 12/20] docs: move usecase of spp_pcap ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 13/20] docs: remove SPP VF ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 14/20] docs: move image of ICMP usecase of spp_vf ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 15/20] docs: revise labels of image of spp_vf usecase ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 16/20] docs: fix image of spp_mirror monitoring usecase ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 17/20] docs: move image of design of spp_vf ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 18/20] docs: move images of design of mirror and pcap ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 19/20] docs: move image of overview of spp_pcap ogawa.yasufumi
2019-02-18 11:48 ` [spp] [PATCH 20/20] docs: fix in image of spp_mirror monitor usecase ogawa.yasufumi
