From: Vipul Ujawane
Date: Mon, 29 Jun 2020 16:33:25 +0800
To: David Christensen
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Poor performance when using OVS with DPDK

So,

> You don't mention how many different flows you're using in the test.
> Don't be surprised if throughput drops when you move from 1,000 flows
> to 1,000,000 flows.

We currently only have one flow, the basic packet-forwarding rule. We use
pktgen's standard built-in packet generation, without any pcap file or
script that would vary the flows. Therefore, increasing the number of
queues (and cores per queue) cannot help; that single flow will always be
handled by one specific queue. Increasing the overall core assignment to
DPDK should then help, but it does not.
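(For reference, beyond watching `top`, one way to double-check that a single
flow really is pinned to one Rx queue/PMD is OVS's own PMD introspection; a
minimal sketch using standard ovs-appctl commands:

    # show which Rx queues are polled by which PMD thread
    ovs-appctl dpif-netdev/pmd-rxq-show

    # reset the counters, run traffic, then compare busy vs. idle cycles per PMD
    ovs-appctl dpif-netdev/pmd-stats-clear
    ovs-appctl dpif-netdev/pmd-stats-show

If pmd-rxq-show lists only one queue on the physical port and pmd-stats-show
shows only one PMD with non-trivial processing cycles, the other reserved
cores are simply idle, which matches what we see.)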
On the other hand, we re-tested the VM-to-VM performance via iperf and the
dpdkvhostuser interfaces in the KVM machines. The performance is still poor
with the new settings, although somewhat improved; it is around 10 Gbps now.
Note again that this is iperf using TCP with MTU-sized packets (with the OVS
kernel datapath, the performance is 20 Gbps in a similar setup).

Thanks.

On Sat, Jun 27, 2020 at 3:32 AM David Christensen wrote:
> > > Why don't you reserve any CPUs for OVS/DPDK or VM usage? All published
> > > performance white papers recommend settings for CPU isolation like
> > > this Mellanox DPDK performance report:
> > >
> > > https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf
> > >
> > > For their test system:
> > >
> > > isolcpus=24-47 intel_idle.max_cstate=0 processor.max_cstate=0
> > > intel_pstate=disable nohz_full=24-47
> > > rcu_nocbs=24-47 rcu_nocb_poll default_hugepagesz=1G hugepagesz=1G
> > > hugepages=64 audit=0 nosoftlockup
> > >
> > > Using the tuned service (CPU partitioning profile) makes this process
> > > easier:
> > >
> > > https://tuned-project.org/
> >
> > Nice tutorial, thanks for sharing. I have checked it and configured our
> > server like this:
> >
> > isolcpus=12-19 intel_idle.max_cstate=0 processor.max_cstate=0
> > nohz_full=12-19 rcu_nocbs=12-19 intel_pstate=disable
> > default_hugepagesz=1G hugepagesz=1G hugepages=24 audit=0 nosoftlockup
> > intel_iommu=on iommu=pt rcu_nocb_poll
> >
> > Even though our servers are NUMA-capable and NUMA-aware, we only have
> > one CPU installed in one socket.
> > And one CPU has 20 physical cores (40 threads), so I decided to use the
> > "top-most" cores for DPDK/OVS; that is the reason for isolcpus=12-19.
>
> You can never have too many cores. On POWER systems I'll sometimes
> reserve 76 out of 80 available cores to improve overall throughput.
>
> > > > ./usertools/dpdk-devbind.py --status
> > > >
> > > > Network devices using kernel driver
> > > > ===================================
> > > > 0000:b3:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2 drv=mlx5_core
> > > > unused=igb_uio,vfio-pci
> > > >
> > > > Due to the way Mellanox cards and their driver work, I have not bound
> > > > igb_uio to the interface; however, the uio, igb_uio and vfio-pci
> > > > kernel modules are loaded.
> > > >
> > > > Relevant part of the VM config for QEMU/KVM
> > > > -------------------------------------------
> > > >
> > > > [CPU-mapping/memory XML stripped by the list's MIME filter;
> > > > surviving fragment:]
> > > > 4096
> > >
> > > Where did you get these CPU mapping values? x86 systems typically map
> > > even-numbered CPUs to one NUMA node and odd-numbered CPUs to a
> > > different NUMA node. You generally want to select CPUs from the same
> > > NUMA node as the mlx5 NIC you're using for DPDK.
> > >
> > > You should have at least 4 CPUs in the VM, selected according to the
> > > NUMA topology of the system.
> >
> > As per my answer above, our system has no secondary NUMA node; all
> > mappings are to the same socket/CPU.
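(Side note: a quick sanity check that the ConnectX-5 and the isolated cores
really sit on the same NUMA node; a minimal sketch, using the PCI address and
interface name from the devbind output above:

    # NUMA node of the NIC; 0 (or -1) is expected on this single-socket system
    cat /sys/bus/pci/devices/0000:b3:00.0/numa_node

    # CPU-to-NUMA-node mapping, to cross-check isolcpus and the PMD mask
    lscpu | grep -i numa

On a machine with only one populated socket, both should point to node 0, so
cores 12-19 chosen above are on the correct node.)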
> > > Take a look at this bash script written for Red Hat:
> > >
> > > https://github.com/ctrautma/RHEL_NIC_QUALIFICATION/blob/ansible/ansible/get_cpulist.sh
> > >
> > > It gives you a good starting reference for which CPUs to select for
> > > the OVS/DPDK and VM configurations on your particular system. Also
> > > review the Ansible script pvp_ovsdpdk.yml; it provides a lot of other
> > > useful steps you might be able to apply to your Debian OS.
> > >
> > > > [libvirt NUMA/vhost-user XML largely stripped by the list's MIME
> > > > filter; surviving fragments:]
> > > > ... memAccess='shared'/>
> > > > ... path='/usr/local/var/run/openvswitch/vhostuser' mo$
> > >
> > > Is there a requirement for mergeable RX buffers? Some PMDs like mlx5
> > > can take advantage of SSE instructions when this is disabled, yielding
> > > better performance.
> >
> > Good point, there is no requirement; I just took an example config and
> > thought it was necessary for the driver queues setting.
>
> That's how we all learn :-)
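(In case it is useful to others on the list: mergeable RX buffers can
apparently be turned off directly in the libvirt domain XML; a rough sketch,
where the socket path is the one from our config above and the queue count
and ring size are purely illustrative:

    <interface type='vhostuser'>
      <source type='unix' path='/usr/local/var/run/openvswitch/vhostuser' mode='client'/>
      <model type='virtio'/>
      <driver queues='2' rx_queue_size='1024'>
        <!-- disable mergeable RX buffers for the virtio device -->
        <host mrg_rxbuf='off'/>
      </driver>
    </interface>
)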
> > > > [PCI address XML stripped by the list's MIME filter; surviving
> > > > fragment: function='0x0'$]
> > >
> > > I don't see hugepage usage in the libvirt XML. Something similar to:
> > >
> > > [memory/memoryBacking hugepage XML stripped by the list's MIME filter;
> > > surviving values:]
> > > 8388608
> > > 8388608
> >
> > I did not copy this part of the XML, but we have hugepages configured
> > properly.
> >
> > > > -----------------------------------
> > > > OVS Start Config
> > > > -----------------------------------
> > > > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> > > > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"
> > > > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xff
> > > > ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0e
> > >
> > > These two masks shouldn't overlap:
> > >
> > > https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/
> >
> > Thanks, this really did help me understand the order in which these
> > commands should be issued.
> >
> > So, the problem now is the following.
> > I applied all the changes you shared, started OVS/DPDK in the proper
> > way, and set these options:
> >
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="8192,0"
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x01000
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> >
> > and, finally, this:
> >
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x0e000
> >
> > The documentation you shared says this last one can even be changed at
> > runtime, so I was playing with it to see whether anything changes.
> >
> > I did not start any VM on top of OVS/DPDK, just set up a port-forward
> > rule (in_port=1, actions=output:IN_PORT), since I only have one
> > physical port on each Mellanox card.
> > Then I generated traffic from the other server towards OVS.
> > Using a 64B packet size, the maximum throughput Pktgen reports is 8 Gbps.
> > In particular, I got these metrics:
> >
> > Size    Sent_pps  Recv_pps  Recv_Gbps
> > 64B     93M       11M       ~8
> > 128B    65M       12.5M     ~15
> > 256B    42.5M     12.3M     ~27
> > 512B    23.5M     11.9M     ~51
> > 1024B   11.9M     10M       ~83
> > 1280B   9.6M      8.3M      ~86
> > 1500B   8.3M      6.7M      ~82
> >
> > It's quite interesting that for 64B the received pps is lower than for
> > larger sizes, because pps should be the practical limit on throughput,
> > and from the packet size we can compute the throughput in Gbps.
>
> Looking at 64B performance gives you a sense of the per-packet overhead
> associated with the DPDK framework and your application. At 100Gb/s
> line rate, 64B frames will arrive every 6.72ns. Since your received PPS
> is peaking around 12.5MPPS I'd guess that it's taking about 80ns of CPU
> time per frame. I don't know how well OVS scales with additional CPUs,
> something to look at.
>
> You don't mention how many different flows you're using in the test.
> Don't be surprised if throughput drops when you move from 1,000 flows to
> 1,000,000 flows.
>
> It's likely that most of your frame loss is due to the NIC's RX buffers
> overflowing and dropping frames due to back pressure (i.e. DPDK/OVS
> can't process packets fast enough). Look at the mlx5's hardware
> statistics to verify.
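(For reference, a minimal sketch of how we would read those hardware
counters; ens2 is the interface name from the devbind output above, the OVS
port name is just a placeholder, and the exact counter names vary between
mlx5 driver/firmware versions:

    # hardware/driver counters remain visible through the kernel netdev,
    # since mlx5's bifurcated driver keeps it alive alongside the DPDK PMD
    ethtool -S ens2 | grep -Ei 'drop|discard|out_of_buffer'

    # OVS's own per-interface statistics for the DPDK port
    ovs-vsctl get Interface <dpdk-port-name> statistics

A steadily growing rx discard/out-of-buffer count during the test would
confirm that the NIC drops frames because OVS cannot drain the queues fast
enough, i.e. the back-pressure theory above.)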
> You may be able to improve the performance by increasing the number of
> RX queues and RX descriptors per queue, and assigning more lcores to
> match the number of queues, allowing the work to be spread more evenly
> and reducing buffer overflows. This often works when running testpmd
> alone, since the app overhead is low, but has less effect on OVS
> performance. You might consider benchmarking testpmd alone vs OVS/DPDK
> to understand the OVS overhead.
>
> > Anyway, OVS-DPDK has 3 cores to use, but only one rx queue is assigned
> > to the port (so basically, as `top` also shows, it is single-core
> > performance).
>
> Increasing the number of RX queues/descriptors and assigning a dedicated
> lcore to each queue will generally improve performance if your
> bottleneck is RX in the PMD.
>
> > Increasing the cores did not help, and the performance remained the
> > same. Is this performance normal for OVS/DPDK?
>
> That's been my experience, though there are others who have more
> experience with performance testing OVS. The platform matters. Look
> for existing whitepapers and compare your system configuration to theirs
> to see what you need to achieve the performance you're looking for.
>
> Dave

--
Vipul Ujawane
Pre-Final Year Undergraduate
Department of Industrial and Systems Engineering
Indian Institute of Technology, Kharagpur