Subject: [dpdk-users] dpdk vhost-user ovs unexpected low performance
From: "Christian Graf"
To: users@dpdk.org
Date: Mon, 14 Oct 2019 10:12:11 +0200

Dear all,

I am kindly asking for help. I am observing packet loss with a dpdk-enabled OVS running vhost-user interfaces. As of now it is unclear to me whether it is the dpdk-enabled VNF attached to OVS that is not able to consume the received packets, or whether it is OVS that does not properly send out the packets.

I guess/hope that I am just missing some proper tuning of the VNF. As my troubleshooting skills with such a setup are limited, any further help or guidance would be very much welcome.

Many thanks,
Christian
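To be precise, the OVS-side TX numbers quoted throughout this mail are taken with the two commands below on the bridge and the vhost-user port facing the vnf (their full output follows further down); the RX side is read from the interface counters inside the VNF itself:

# OVS-side counters for the ports on bridge br-dpdk
sudo ovs-ofctl dump-ports br-dpdk
ovs-vsctl list interface vhost-user-vmx1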
Topology
--------
vnf (ge-0/0/4) -------- [ vhost-user-vmx2 -- ovs bridge br-dpdk -- vhost-user-vmx1 ] ----- vnf (ge-0/0/3)

Issue
-----
The OVS interface vhost-user-vmx1 claims to have egressed (TX) 1960035 packets towards the attached vnf ge-0/0/3 interface, but the vnf (ge-0/0/3) reports only 1599414 packets received (RX) - so somehow roughly 400,000 packets get lost. The packet rate is about 200 kpps.

Some general notes
------------------
The vnf as well as OVS run on NUMA node 0.
The 4 PMD threads and the lcore thread are pinned to dedicated cores; HT is enabled.
The vnf has all of its cores pinned as well.
Cores are isolated and hugepages are enabled.

ovs version
-----------
root@bn831x1a-node6-lab:~# sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
true
root@bn831x1a-node6-lab:~# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.0
DPDK 18.11.0
root@bn831x1a-node6-lab:~# sudo ovs-vsctl get Open_vSwitch . dpdk_version
"DPDK 18.11.0"
root@bn831x1a-node6-lab:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 19.04
Release:        19.04
Codename:       disco

hugepages enabled
-----------------
dcso@bn831x1a-node6-lab:~$ cat /sys/devices/system/node/node*/meminfo | grep Huge
Node 0 AnonHugePages:   1157120 kB
Node 0 ShmemHugePages:        0 kB
Node 0 HugePages_Total:    40
Node 0 HugePages_Free:     23
Node 0 HugePages_Surp:      0
Node 1 AnonHugePages:         0 kB
Node 1 ShmemHugePages:        0 kB
Node 1 HugePages_Total:    40
Node 1 HugePages_Free:     39
Node 1 HugePages_Surp:      0

vnf vhost-config
----------------
virsh edit vfp-vmx1
...
...
...
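One knob that does not appear in the ovs config listing below is the PMD pinning itself; the PMD threads were pinned via pmd-cpu-mask roughly as follows (the exact mask value here is reconstructed from the cores 8, 9, 28 and 29 visible in the pmd-rxq-show output further down, so treat it as approximate):

# pin the PMD threads to cores 8, 9, 28 and 29 (bits 8, 9, 28, 29 => mask 0x30000300)
sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x30000300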
ovs config
----------
export PATH=$PATH:/usr/share/openvswitch/scripts
export DB_SOCK=/var/run/openvswitch/db.sock
db=/var/lib/openvswitch/conf.db
schema="/usr/share/openvswitch/vswitch.ovsschema"
ovsdb-tool create $db $schema

sudo ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=true
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x100000
sudo ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2

bridge="br-dpdk"
ovs-vsctl del-br $bridge
ovs-vsctl add-br $bridge -- set bridge $bridge datapath_type=netdev
ovs-vsctl add-port $bridge vhost-user-vmx1 -- set Interface vhost-user-vmx1 type=dpdkvhostuser
ovs-vsctl add-port $bridge vhost-user-vmx2 -- set Interface vhost-user-vmx2 type=dpdkvhostuser

ovs-vsctl set interface vhost-user-vmx1 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:28"
ovs-vsctl set interface vhost-user-vmx2 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:9,1:29"

ovs-vsctl set Interface vhost-user-vmx1 options:n_txq_desc=2
ovs-vsctl set Interface vhost-user-vmx2 options:n_txq_desc=2

some show cmd
-------------
root@bn831x1a-node6-lab:~# sudo ovs-vsctl show
fff633a5-c11e-4b78-b1cb-dbd1ace240a1
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
        Port "vhost-user-vmx2"
            trunks: [10, 11, 12]
            Interface "vhost-user-vmx2"
                type: dpdkvhostuser
                options: {n_rxq="2", n_txq_desc="2"}
        Port "vhost-user-vmx1"
            trunks: [10, 11, 12]
            Interface "vhost-user-vmx1"
                type: dpdkvhostuser
                options: {n_rxq="2", n_txq_desc="2"}
    ovs_version: "2.11.0"

port stats
----------
root@bn831x1a-node6-lab:~# sudo ovs-ofctl dump-ports br-dpdk
OFPST_PORT reply (xid=0x2): 3 ports
  port "vhost-user-vmx2": rx pkts=1960035, bytes=678163420, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=1960963, bytes=678484021, drop=0, errs=?, coll=?
  port "vhost-user-vmx1": rx pkts=1960964, bytes=678484111, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=1960035, bytes=678163420, drop=0, errs=?, coll=?
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=5, errs=0, coll=0

interface-stats
---------------
root@bn831x1a-node6-lab:~# ovs-vsctl list interface vhost-user-vmx1
_uuid               : 9e7d2537-b200-486f-bda1-d04d371c9765
..
mac                 : []
mac_in_use          : "00:00:00:00:00:00"
mtu                 : 1500
mtu_request         : []
name                : "vhost-user-vmx1"
..
options             : {n_rxq="2", n_txq_desc="2"}
other_config        : {pmd-rxq-affinity="0:8,1:28"}
statistics          : {"rx_1024_to_1522_packets"=0, "rx_128_to_255_packets"=1, "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=4, "rx_256_to_511_packets"=1960929, "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=30, rx_bytes=678484111, rx_dropped=0, rx_errors=0, rx_packets=1960964, tx_bytes=678163420, tx_dropped=0, tx_packets=1960035}
status              : {features="0x0000000050008000", mode=server, num_of_vrings="2", numa="0", socket="/var/run/openvswitch/vhost-user-vmx1", status=connected, "vring_0_size"="1024", "vring_1_size"="1024"}
type                : dpdkvhostuser
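If it helps, I can also provide the datapath-level port statistics and the OVS coverage counters; I would collect them like this (output omitted here for brevity):

# per-port stats as seen by the userspace datapath, plus internal drop counters
sudo ovs-appctl dpctl/show -s
sudo ovs-appctl coverage/show | grep -i drop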
PMD-stats look good to me
-------------------------
root@bn831x1a-node6-lab:~# sudo ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 0 core_id 8:
  packets received: 1960964
  packet recirculations: 0
  avg. datapath passes per packet: 1.00
  emc hits: 1960909
  smc hits: 0
  megaflow hits: 50
  avg. subtable lookups per megaflow hit: 1.00
  miss with success upcall: 5
  miss with failed upcall: 0
  avg. packets per output batch: 1.03
  idle cycles: 2112185253324 (99.75%)
  processing cycles: 5244829908 (0.25%)
  avg cycles per packet: 1079790.39 (2117430083232/1960964)
  avg processing cycles per packet: 2674.62 (5244829908/1960964)
pmd thread numa_id 0 core_id 9:
  packets received: 1960035
  packet recirculations: 0
  avg. datapath passes per packet: 1.00
  emc hits: 1959755
  smc hits: 0
  megaflow hits: 276
  avg. subtable lookups per megaflow hit: 1.92
  miss with success upcall: 4
  miss with failed upcall: 0
  avg. packets per output batch: 1.08
  idle cycles: 2112413001990 (99.76%)
  processing cycles: 4990062087 (0.24%)
  avg cycles per packet: 1080288.39 (2117403064077/1960035)
  avg processing cycles per packet: 2545.90 (4990062087/1960035)
pmd thread numa_id 0 core_id 28:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00
pmd thread numa_id 0 core_id 29:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00
main thread:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00

PMD-cpu utilization looks rather low
------------------------------------
root@bn831x1a-node6-lab:~# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 8:
  isolated : true
  port: vhost-user-vmx1   queue-id:  0  pmd usage:  2 %
pmd thread numa_id 0 core_id 9:
  isolated : true
  port: vhost-user-vmx2   queue-id:  0  pmd usage:  2 %
pmd thread numa_id 0 core_id 28:
  isolated : false
pmd thread numa_id 0 core_id 29:
  isolated : false

As said initially, OVS claims to have egressed 1.96 million packets on its interface vhost-user-vmx1, yet the vnf attached to this interface sees only about 1.58 million packets, so somewhere we are losing roughly 400,000 packets.
As the packet rate is only about 200 kpps, both OVS and the VNF should have been able to process packets at the given rate.

How can I investigate further where the packets are getting lost? What further tuning would you suggest?

Thanks,
Christian
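PS: regarding the NUMA and core-pinning claims above, this is how I verify on the host that the guest memory and vCPUs of the VNF really sit on node 0 (domain name as in the vhost-config section; output omitted here):

# show the NUMA memory policy and the vCPU-to-host-core pinning of the guest
virsh numatune vfp-vmx1
virsh vcpupin vfp-vmx1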