From: Kevin Traynor <ktraynor@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
yuanhan.liu@linux.intel.com, thomas.monjalon@6wind.com,
john.mcnamara@intel.com, zhiyong.yang@intel.com, dev@dpdk.org
Cc: fbaudin@redhat.com
Subject: Re: [dpdk-dev] [PATCH] doc: introduce PVP reference benchmark
Date: Thu, 24 Nov 2016 11:58:17 +0000
Message-ID: <ef9f7089-a61e-5e78-cc81-b209ae8b60f8@redhat.com>
In-Reply-To: <20161123210006.7113-1-maxime.coquelin@redhat.com>
On 11/23/2016 09:00 PM, Maxime Coquelin wrote:
> Having reference benchmarks is important in order to obtain
> reproducible performance figures.
>
> This patch describes required steps to configure a PVP setup
> using testpmd in both host and guest.
>
> Not relying on an external vSwitch eases integration in a CI loop, as
> it is not impacted by DPDK API changes.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A short template/hint listing the main things to report after a run
could be useful to help ML discussions about results, e.g.:
Traffic Generator: IXIA
Acceptable Loss: 100% (i.e. raw throughput test)
DPDK version/commit: v16.11
QEMU version/commit: v2.7.0
Patches applied: <link to patchwork>
CPU: E5-2680 v3, 2.8GHz
NIC: ixgbe 82599
Result: x mpps
> ---
> doc/guides/howto/img/pvp_2nics.svg | 556 +++++++++++++++++++++++++++
> doc/guides/howto/index.rst | 1 +
> doc/guides/howto/pvp_reference_benchmark.rst | 389 +++++++++++++++++++
> 3 files changed, 946 insertions(+)
> create mode 100644 doc/guides/howto/img/pvp_2nics.svg
> create mode 100644 doc/guides/howto/pvp_reference_benchmark.rst
>
<snip>
> +Host tuning
> +~~~~~~~~~~~
I would also add turbo boost=disabled in the BIOS.
> +
> +#. Append these options to the kernel command line:
> +
> + .. code-block:: console
> +
> + intel_pstate=disable mce=ignore_ce default_hugepagesz=1G hugepagesz=1G hugepages=6 isolcpus=2-7 rcu_nocbs=2-7 nohz_full=2-7 iommu=pt intel_iommu=on
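It might be worth showing how to check the options took effect after
reboot, e.g.:

  cat /proc/cmdline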
> +
> +#. Disable hyper-threads at runtime if necessary and the BIOS is not accessible:
> +
> + .. code-block:: console
> +
> + cat /sys/devices/system/cpu/cpu*[0-9]/topology/thread_siblings_list \
> + | sort | uniq \
> + | awk -F, '{system("echo 0 > /sys/devices/system/cpu/cpu"$2"/online")}'
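Maybe also show how to confirm the siblings went offline, e.g.
'lscpu | grep "Thread(s) per core"' should report 1 afterwards.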
> +
> +#. Disable NMIs:
> +
> + .. code-block:: console
> +
> + echo 0 > /proc/sys/kernel/nmi_watchdog
> +
> +#. Exclude isolated CPUs from the writeback cpumask:
> +
> + .. code-block:: console
> +
> + echo ffffff03 > /sys/bus/workqueue/devices/writeback/cpumask
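It could help to show where the mask value comes from, so readers can
adapt it to other CPU lists. Assuming a 32-bit cpumask with CPUs 2-7
isolated (bits 0xfc cleared):

  printf "%x\n" $(( 0xffffffff & ~0xfc ))   # -> ffffff03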
> +
> +#. Isolate CPUs from IRQs:
> +
> + .. code-block:: console
> +
> + clear_mask=0xfc #Isolate CPU2 to CPU7 from IRQs
> + for i in /proc/irq/*/smp_affinity
> + do
> + echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
> + done
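Note that the write can fail for IRQs that cannot be migrated (e.g.
per-cpu IRQs), so a "2>/dev/null" on the redirect might be wanted to
avoid noisy errors.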
> +
> +Qemu build
> +~~~~~~~~~~
> +
> + .. code-block:: console
> +
> + git clone git://git.qemu.org/qemu.git
> + cd qemu
> + mkdir bin
> + cd bin
> + ../configure --target-list=x86_64-softmmu
> + make
> +
> +DPDK build
> +~~~~~~~~~~
> +
> + .. code-block:: console
> +
> + git clone git://dpdk.org/dpdk
> + cd dpdk
> + export RTE_SDK=$PWD
> + make install T=x86_64-native-linuxapp-gcc DESTDIR=install
> +
> +Testpmd launch
> +~~~~~~~~~~~~~~
> +
> +#. Assign NICs to DPDK:
> +
> + .. code-block:: console
> +
> + modprobe vfio-pci
> + $RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
> +
> +*Note: the Sandy Bridge family seems to have some limitations regarding its
> +IOMMU, giving poor performance results. To achieve good performance on these
> +machines, consider using UIO instead.*
> +
> +#. Launch testpmd application:
> +
> + .. code-block:: console
> +
> + $RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
> + --vdev 'net_vhost0,iface=/tmp/vhost-user1' \
> + --vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
> + --portmask=f --disable-hw-vlan -i --rxq=1 --txq=1 \
> + --nb-cores=4 --forward-mode=io
> +
> +#. In testpmd interactive mode, set the portlist to obtain the right chaining:
> +
> + .. code-block:: console
> +
> + set portlist 0,2,1,3
> + start
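It may be worth spelling out the port numbering here: ports 0 and 1
are the physical NICs, ports 2 and 3 the vhost-user vdevs, so this
portlist pairs NIC0 with vhost-user1 and NIC1 with vhost-user2 (my
reading of the forwarding pairs, worth double-checking).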
> +
> +VM launch
> +~~~~~~~~~
> +
> +The VM may be launched either by calling QEMU directly, or by using libvirt.
> +
> +#. Qemu way:
> +
> +Launch QEMU with two Virtio-net devices paired to the vhost-user sockets created by testpmd:
> +
> + .. code-block:: console
> +
> + <QEMU path>/bin/x86_64-softmmu/qemu-system-x86_64 \
> + -enable-kvm -cpu host -m 3072 -smp 3 \
> + -chardev socket,id=char0,path=/tmp/vhost-user1 \
> + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> + -device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
> + -chardev socket,id=char1,path=/tmp/vhost-user2 \
> + -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> + -device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
> + -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
> + -numa node,memdev=mem -mem-prealloc \
> + -net user,hostfwd=tcp::1002$1-:22 -net nic \
> + -qmp unix:/tmp/qmp.socket,server,nowait \
> + -monitor stdio <vm_image>.qcow2
Mergeable rx buffers =off would probably also want to be tested when
evaluating any performance improvements/regressions.
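I think that can be done by adding the mrg_rxbuf property to the
virtio-net-pci devices, e.g.:

  -device virtio-net-pci,netdev=mynet1,mrg_rxbuf=off,...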
> +
> +You can use this qmp-vcpu-pin script to pin vCPUs:
> +
> + .. code-block:: python
> +
> + #!/usr/bin/python
> + # QEMU vCPU pinning tool
> + #
> + # Copyright (C) 2016 Red Hat Inc.
> + #
> + # Authors:
> + # Maxime Coquelin <maxime.coquelin@redhat.com>
> + #
> + # This work is licensed under the terms of the GNU GPL, version 2. See
> + # the COPYING file in the top-level directory
> + import argparse
> + import json
> + import os
> +
> + from subprocess import call
> + from qmp import QEMUMonitorProtocol
> +
> + pinned = []
> +
> + parser = argparse.ArgumentParser(description='Pin QEMU vCPUs to physical CPUs')
> + parser.add_argument('-s', '--server', type=str, required=True,
> + help='QMP server path or address:port')
> + parser.add_argument('cpu', type=int, nargs='+',
> + help='Physical CPUs IDs')
> + args = parser.parse_args()
> +
> + devnull = open(os.devnull, 'w')
> +
> + srv = QEMUMonitorProtocol(args.server)
> + srv.connect()
> +
> + for vcpu in srv.command('query-cpus'):
> + vcpuid = vcpu['CPU']
> + tid = vcpu['thread_id']
> + if tid in pinned:
> + print 'vCPU{}\'s tid {} already pinned, skipping'.format(vcpuid, tid)
> + continue
> +
> + cpuid = args.cpu[vcpuid % len(args.cpu)]
> + print 'Pin vCPU {} (tid {}) to physical CPU {}'.format(vcpuid, tid, cpuid)
> + try:
> + call(['taskset', '-pc', str(cpuid), str(tid)], stdout=devnull)
> + pinned.append(tid)
> + except OSError:
> + print 'Failed to pin vCPU{} to CPU{}'.format(vcpuid, cpuid)
> +
> +
> +The script can be used as follows, for example to pin 3 vCPUs to CPUs 1, 6 and 7:
I think it would be good to explicitly explain the link you've made
between the core numbers in this case: the host isolcpus list, the
vCPU pinning above and the core list in the testpmd cmd line later.
e.g. vCPU0 is pinned to host CPU1 here, which is not in the
isolcpus=2-7 list.
> +
> + .. code-block:: console
> +
> + export PYTHONPATH=$PYTHONPATH:<QEMU path>/scripts/qmp
> + ./qmp-vcpu-pin -s /tmp/qmp.socket 1 6 7
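Maybe also state the resulting mapping explicitly; given the
round-robin in the script, this example pins vCPU0->CPU1, vCPU1->CPU6
and vCPU2->CPU7.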
> +
> +#. Libvirt way:
> +
> +Some initial steps are required for libvirt to be able to connect to testpmd's
> +sockets.
> +
> +First, the SELinux policy needs to be set to permissive, as testpmd is run as
> +root (reboot required):
> +
> + .. code-block:: console
> +
> + cat /etc/selinux/config
> +
> + # This file controls the state of SELinux on the system.
> + # SELINUX= can take one of these three values:
> + # enforcing - SELinux security policy is enforced.
> + # permissive - SELinux prints warnings instead of enforcing.
> + # disabled - No SELinux policy is loaded.
> + SELINUX=permissive
> + # SELINUXTYPE= can take one of three values:
> + # targeted - Targeted processes are protected,
> + # minimum - Modification of targeted policy. Only selected processes are protected.
> + # mls - Multi Level Security protection.
> + SELINUXTYPE=targeted
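Note that 'setenforce 0' switches to permissive mode at runtime and
would avoid the reboot, though the config file change is still needed
to make it persistent:

  setenforce 0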
> +
> +
> +Also, Qemu needs to be run as root, which has to be specified in /etc/libvirt/qemu.conf:
> +
> + .. code-block:: console
> +
> + user = "root"
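It's probably worth mentioning that libvirtd has to be restarted for
the qemu.conf change to take effect, e.g.:

  systemctl restart libvirtd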
> +
> +Once the domain is created, the following snippet is an extract of the most
> +important information (hugepages, vCPU pinning, Virtio PCI devices):
> +
> + .. code-block:: xml
> +
> + <domain type='kvm'>
> + <memory unit='KiB'>3145728</memory>
> + <currentMemory unit='KiB'>3145728</currentMemory>
> + <memoryBacking>
> + <hugepages>
> + <page size='1048576' unit='KiB' nodeset='0'/>
> + </hugepages>
> + <locked/>
> + </memoryBacking>
> + <vcpu placement='static'>3</vcpu>
> + <cputune>
> + <vcpupin vcpu='0' cpuset='1'/>
> + <vcpupin vcpu='1' cpuset='6'/>
> + <vcpupin vcpu='2' cpuset='7'/>
> + <emulatorpin cpuset='0'/>
> + </cputune>
> + <numatune>
> + <memory mode='strict' nodeset='0'/>
> + </numatune>
> + <os>
> + <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
> + <boot dev='hd'/>
> + </os>
> + <cpu mode='host-passthrough'>
> + <topology sockets='1' cores='3' threads='1'/>
> + <numa>
> + <cell id='0' cpus='0-2' memory='3145728' unit='KiB' memAccess='shared'/>
> + </numa>
> + </cpu>
> + <devices>
> + <interface type='vhostuser'>
> + <mac address='56:48:4f:53:54:01'/>
> + <source type='unix' path='/tmp/vhost-user1' mode='client'/>
> + <model type='virtio'/>
> + <driver name='vhost' rx_queue_size='256' />
> + <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
> + </interface>
> + <interface type='vhostuser'>
> + <mac address='56:48:4f:53:54:02'/>
> + <source type='unix' path='/tmp/vhost-user2' mode='client'/>
> + <model type='virtio'/>
> + <driver name='vhost' rx_queue_size='256' />
> + <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
> + </interface>
> + </devices>
> + </domain>
> +
> +Guest setup
> +...........
> +
> +Guest tuning
> +~~~~~~~~~~~~
> +
> +#. Append these options to the kernel command line:
> +
> + .. code-block:: console
> +
> + default_hugepagesz=1G hugepagesz=1G hugepages=1 intel_iommu=on iommu=pt isolcpus=1,2 rcu_nocbs=1,2 nohz_full=1,2
> +
> +#. Disable NMIs:
> +
> + .. code-block:: console
> +
> + echo 0 > /proc/sys/kernel/nmi_watchdog
> +
> +#. Exclude isolated CPU1 and CPU2 from the writeback wq cpumask:
> +
> + .. code-block:: console
> +
> + echo 1 > /sys/bus/workqueue/devices/writeback/cpumask
> +
> +#. Isolate CPUs from IRQs:
> +
> + .. code-block:: console
> +
> + clear_mask=0x6 #Isolate CPU1 and CPU2 from IRQs
> + for i in /proc/irq/*/smp_affinity
> + do
> + echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
> + done
> +
> +DPDK build
> +~~~~~~~~~~
> +
> + .. code-block:: console
> +
> + git clone git://dpdk.org/dpdk
> + cd dpdk
> + export RTE_SDK=$PWD
> + make install T=x86_64-native-linuxapp-gcc DESTDIR=install
> +
> +Testpmd launch
> +~~~~~~~~~~~~~~
> +
> +Probe vfio module without iommu:
> +
> + .. code-block:: console
> +
> + modprobe -r vfio_iommu_type1
> + modprobe -r vfio
> + modprobe vfio enable_unsafe_noiommu_mode=1
> + cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> + modprobe vfio-pci
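Maybe show the expected output of the cat here (it should print "Y"),
so users can confirm no-iommu mode is enabled before binding the
devices.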
> +
> +Bind virtio-net devices to DPDK:
> +
> + .. code-block:: console
> +
> + $RTE_SDK/tools/dpdk-devbind.py -b vfio-pci 0000:00:10.0 0000:00:11.0
> +
> +Start testpmd:
> +
> + .. code-block:: console
> +
> + $RTE_SDK/install/bin/testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
> + --proc-type auto --file-prefix pg -- \
> + --portmask=3 --forward-mode=macswap --port-topology=chained \
> + --disable-hw-vlan --disable-rss -i --rxq=1 --txq=1 \
> + --rxd=256 --txd=256 --nb-cores=2 --auto-start
>